Nov 22 07:01:28 localhost kernel: Linux version 5.14.0-639.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025
Nov 22 07:01:28 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 22 07:01:28 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 22 07:01:28 localhost kernel: BIOS-provided physical RAM map:
Nov 22 07:01:28 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 22 07:01:28 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 22 07:01:28 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 22 07:01:28 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 22 07:01:28 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 22 07:01:28 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 22 07:01:28 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 22 07:01:28 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 22 07:01:28 localhost kernel: NX (Execute Disable) protection: active
Nov 22 07:01:28 localhost kernel: APIC: Static calls initialized
Nov 22 07:01:28 localhost kernel: SMBIOS 2.8 present.
Nov 22 07:01:28 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 22 07:01:28 localhost kernel: Hypervisor detected: KVM
Nov 22 07:01:28 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 22 07:01:28 localhost kernel: kvm-clock: using sched offset of 6970527925 cycles
Nov 22 07:01:28 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 22 07:01:28 localhost kernel: tsc: Detected 2799.998 MHz processor
Nov 22 07:01:28 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 22 07:01:28 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 22 07:01:28 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 22 07:01:28 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 22 07:01:28 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 22 07:01:28 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 22 07:01:28 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 22 07:01:28 localhost kernel: Using GB pages for direct mapping
Nov 22 07:01:28 localhost kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 22 07:01:28 localhost kernel: ACPI: Early table checksum verification disabled
Nov 22 07:01:28 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 22 07:01:28 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 07:01:28 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 07:01:28 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 07:01:28 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 22 07:01:28 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 07:01:28 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 07:01:28 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 22 07:01:28 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 22 07:01:28 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 22 07:01:28 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 22 07:01:28 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 22 07:01:28 localhost kernel: No NUMA configuration found
Nov 22 07:01:28 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 22 07:01:28 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Nov 22 07:01:28 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 22 07:01:28 localhost kernel: Zone ranges:
Nov 22 07:01:28 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 22 07:01:28 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 22 07:01:28 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 22 07:01:28 localhost kernel:   Device   empty
Nov 22 07:01:28 localhost kernel: Movable zone start for each node
Nov 22 07:01:28 localhost kernel: Early memory node ranges
Nov 22 07:01:28 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 22 07:01:28 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 22 07:01:28 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 22 07:01:28 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 22 07:01:28 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 22 07:01:28 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 22 07:01:28 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 22 07:01:28 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 22 07:01:28 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 22 07:01:28 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 22 07:01:28 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 22 07:01:28 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 22 07:01:28 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 22 07:01:28 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 22 07:01:28 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 22 07:01:28 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 22 07:01:28 localhost kernel: TSC deadline timer available
Nov 22 07:01:28 localhost kernel: CPU topo: Max. logical packages:   8
Nov 22 07:01:28 localhost kernel: CPU topo: Max. logical dies:       8
Nov 22 07:01:28 localhost kernel: CPU topo: Max. dies per package:   1
Nov 22 07:01:28 localhost kernel: CPU topo: Max. threads per core:   1
Nov 22 07:01:28 localhost kernel: CPU topo: Num. cores per package:     1
Nov 22 07:01:28 localhost kernel: CPU topo: Num. threads per package:   1
Nov 22 07:01:28 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 22 07:01:28 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 22 07:01:28 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 22 07:01:28 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 22 07:01:28 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 22 07:01:28 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 22 07:01:28 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 22 07:01:28 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 22 07:01:28 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 22 07:01:28 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 22 07:01:28 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 22 07:01:28 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 22 07:01:28 localhost kernel: Booting paravirtualized kernel on KVM
Nov 22 07:01:28 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 22 07:01:28 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 22 07:01:28 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 22 07:01:28 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Nov 22 07:01:28 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Nov 22 07:01:28 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 22 07:01:28 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 22 07:01:28 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64", will be passed to user space.
Nov 22 07:01:28 localhost kernel: random: crng init done
Nov 22 07:01:28 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 22 07:01:28 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 22 07:01:28 localhost kernel: Fallback order for Node 0: 0 
Nov 22 07:01:28 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 22 07:01:28 localhost kernel: Policy zone: Normal
Nov 22 07:01:28 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 22 07:01:28 localhost kernel: software IO TLB: area num 8.
Nov 22 07:01:28 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 22 07:01:28 localhost kernel: ftrace: allocating 49298 entries in 193 pages
Nov 22 07:01:28 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 22 07:01:28 localhost kernel: Dynamic Preempt: voluntary
Nov 22 07:01:28 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 22 07:01:28 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 22 07:01:28 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 22 07:01:28 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 22 07:01:28 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 22 07:01:28 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 22 07:01:28 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 22 07:01:28 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 22 07:01:28 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 22 07:01:28 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 22 07:01:28 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 22 07:01:28 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 22 07:01:28 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 22 07:01:28 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 22 07:01:28 localhost kernel: Console: colour VGA+ 80x25
Nov 22 07:01:28 localhost kernel: printk: console [ttyS0] enabled
Nov 22 07:01:28 localhost kernel: ACPI: Core revision 20230331
Nov 22 07:01:28 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 22 07:01:28 localhost kernel: x2apic enabled
Nov 22 07:01:28 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 22 07:01:28 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 22 07:01:28 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 22 07:01:28 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 22 07:01:28 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 22 07:01:28 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 22 07:01:28 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 22 07:01:28 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 22 07:01:28 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 22 07:01:28 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 22 07:01:28 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 22 07:01:28 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 22 07:01:28 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 22 07:01:28 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 22 07:01:28 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 22 07:01:28 localhost kernel: x86/bugs: return thunk changed
Nov 22 07:01:28 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 22 07:01:28 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 22 07:01:28 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 22 07:01:28 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 22 07:01:28 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 22 07:01:28 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 22 07:01:28 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 22 07:01:28 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 22 07:01:28 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 22 07:01:28 localhost kernel: landlock: Up and running.
Nov 22 07:01:28 localhost kernel: Yama: becoming mindful.
Nov 22 07:01:28 localhost kernel: SELinux:  Initializing.
Nov 22 07:01:28 localhost kernel: LSM support for eBPF active
Nov 22 07:01:28 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 22 07:01:28 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 22 07:01:28 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 22 07:01:28 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 22 07:01:28 localhost kernel: ... version:                0
Nov 22 07:01:28 localhost kernel: ... bit width:              48
Nov 22 07:01:28 localhost kernel: ... generic registers:      6
Nov 22 07:01:28 localhost kernel: ... value mask:             0000ffffffffffff
Nov 22 07:01:28 localhost kernel: ... max period:             00007fffffffffff
Nov 22 07:01:28 localhost kernel: ... fixed-purpose events:   0
Nov 22 07:01:28 localhost kernel: ... event mask:             000000000000003f
Nov 22 07:01:28 localhost kernel: signal: max sigframe size: 1776
Nov 22 07:01:28 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 22 07:01:28 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 22 07:01:28 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 22 07:01:28 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 22 07:01:28 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 22 07:01:28 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 22 07:01:28 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 22 07:01:28 localhost kernel: node 0 deferred pages initialised in 9ms
Nov 22 07:01:28 localhost kernel: Memory: 7765840K/8388068K available (16384K kernel code, 5786K rwdata, 13900K rodata, 4188K init, 7176K bss, 616280K reserved, 0K cma-reserved)
Nov 22 07:01:28 localhost kernel: devtmpfs: initialized
Nov 22 07:01:28 localhost kernel: x86/mm: Memory block size: 128MB
Nov 22 07:01:28 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 22 07:01:28 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 22 07:01:28 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 22 07:01:28 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 22 07:01:28 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 22 07:01:28 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 22 07:01:28 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 22 07:01:28 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 22 07:01:28 localhost kernel: audit: type=2000 audit(1763794886.799:1): state=initialized audit_enabled=0 res=1
Nov 22 07:01:28 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 22 07:01:28 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 22 07:01:28 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 22 07:01:28 localhost kernel: cpuidle: using governor menu
Nov 22 07:01:28 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 22 07:01:28 localhost kernel: PCI: Using configuration type 1 for base access
Nov 22 07:01:28 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 22 07:01:28 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 22 07:01:28 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 22 07:01:28 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 22 07:01:28 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 22 07:01:28 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 22 07:01:28 localhost kernel: Demotion targets for Node 0: null
Nov 22 07:01:28 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 22 07:01:28 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 22 07:01:28 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 22 07:01:28 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 22 07:01:28 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 22 07:01:28 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 22 07:01:28 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 22 07:01:28 localhost kernel: ACPI: Interpreter enabled
Nov 22 07:01:28 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 22 07:01:28 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 22 07:01:28 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 22 07:01:28 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 22 07:01:28 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 22 07:01:28 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 22 07:01:28 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [3] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [4] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [5] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [6] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [7] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [8] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [9] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [10] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [11] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [12] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [13] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [14] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [15] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [16] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [17] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [18] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [19] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [20] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [21] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [22] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [23] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [24] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [25] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [26] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [27] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [28] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [29] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [30] registered
Nov 22 07:01:28 localhost kernel: acpiphp: Slot [31] registered
Nov 22 07:01:28 localhost kernel: PCI host bridge to bus 0000:00
Nov 22 07:01:28 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 22 07:01:28 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 22 07:01:28 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 22 07:01:28 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 22 07:01:28 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 22 07:01:28 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 22 07:01:28 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 22 07:01:28 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 22 07:01:28 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 22 07:01:28 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 22 07:01:28 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 22 07:01:28 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 22 07:01:28 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 22 07:01:28 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 22 07:01:28 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 22 07:01:28 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 22 07:01:28 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 22 07:01:28 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 22 07:01:28 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 22 07:01:28 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 22 07:01:28 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 22 07:01:28 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 22 07:01:28 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 22 07:01:28 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 22 07:01:28 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 22 07:01:28 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 22 07:01:28 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 22 07:01:28 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 22 07:01:28 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 22 07:01:28 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 22 07:01:28 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 22 07:01:28 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 22 07:01:28 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 22 07:01:28 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 22 07:01:28 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 22 07:01:28 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 22 07:01:28 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 22 07:01:28 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 22 07:01:28 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 22 07:01:28 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 22 07:01:28 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 22 07:01:28 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 22 07:01:28 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 22 07:01:28 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 22 07:01:28 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 22 07:01:28 localhost kernel: iommu: Default domain type: Translated
Nov 22 07:01:28 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 22 07:01:28 localhost kernel: SCSI subsystem initialized
Nov 22 07:01:28 localhost kernel: ACPI: bus type USB registered
Nov 22 07:01:28 localhost kernel: usbcore: registered new interface driver usbfs
Nov 22 07:01:28 localhost kernel: usbcore: registered new interface driver hub
Nov 22 07:01:28 localhost kernel: usbcore: registered new device driver usb
Nov 22 07:01:28 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 22 07:01:28 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 22 07:01:28 localhost kernel: PTP clock support registered
Nov 22 07:01:28 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 22 07:01:28 localhost kernel: NetLabel: Initializing
Nov 22 07:01:28 localhost kernel: NetLabel:  domain hash size = 128
Nov 22 07:01:28 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 22 07:01:28 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 22 07:01:28 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 22 07:01:28 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 22 07:01:28 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 22 07:01:28 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Nov 22 07:01:28 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 22 07:01:28 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 22 07:01:28 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 22 07:01:28 localhost kernel: vgaarb: loaded
Nov 22 07:01:28 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 22 07:01:28 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 22 07:01:28 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 22 07:01:28 localhost kernel: pnp: PnP ACPI init
Nov 22 07:01:28 localhost kernel: pnp 00:03: [dma 2]
Nov 22 07:01:28 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 22 07:01:28 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 22 07:01:28 localhost kernel: NET: Registered PF_INET protocol family
Nov 22 07:01:28 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 22 07:01:28 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 22 07:01:28 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 22 07:01:28 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 22 07:01:28 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 22 07:01:28 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 22 07:01:28 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 22 07:01:28 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 22 07:01:28 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 22 07:01:28 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 22 07:01:28 localhost kernel: NET: Registered PF_XDP protocol family
Nov 22 07:01:28 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 22 07:01:28 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 22 07:01:28 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 22 07:01:28 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 22 07:01:28 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 22 07:01:28 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 22 07:01:28 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 22 07:01:28 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 22 07:01:28 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 86428 usecs
Nov 22 07:01:28 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 22 07:01:28 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 22 07:01:28 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 22 07:01:28 localhost kernel: ACPI: bus type thunderbolt registered
Nov 22 07:01:28 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 22 07:01:28 localhost kernel: Initialise system trusted keyrings
Nov 22 07:01:28 localhost kernel: Key type blacklist registered
Nov 22 07:01:28 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 22 07:01:28 localhost kernel: zbud: loaded
Nov 22 07:01:28 localhost kernel: integrity: Platform Keyring initialized
Nov 22 07:01:28 localhost kernel: integrity: Machine keyring initialized
Nov 22 07:01:28 localhost kernel: Freeing initrd memory: 85868K
Nov 22 07:01:28 localhost kernel: NET: Registered PF_ALG protocol family
Nov 22 07:01:28 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 22 07:01:28 localhost kernel: Key type asymmetric registered
Nov 22 07:01:28 localhost kernel: Asymmetric key parser 'x509' registered
Nov 22 07:01:28 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 22 07:01:28 localhost kernel: io scheduler mq-deadline registered
Nov 22 07:01:28 localhost kernel: io scheduler kyber registered
Nov 22 07:01:28 localhost kernel: io scheduler bfq registered
Nov 22 07:01:28 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 22 07:01:28 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 22 07:01:28 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 22 07:01:28 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 22 07:01:28 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 22 07:01:28 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 22 07:01:28 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 22 07:01:28 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 22 07:01:28 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 22 07:01:28 localhost kernel: Non-volatile memory driver v1.3
Nov 22 07:01:28 localhost kernel: rdac: device handler registered
Nov 22 07:01:28 localhost kernel: hp_sw: device handler registered
Nov 22 07:01:28 localhost kernel: emc: device handler registered
Nov 22 07:01:28 localhost kernel: alua: device handler registered
Nov 22 07:01:28 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 22 07:01:28 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 22 07:01:28 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 22 07:01:28 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 22 07:01:28 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 22 07:01:28 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 22 07:01:28 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 22 07:01:28 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-639.el9.x86_64 uhci_hcd
Nov 22 07:01:28 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 22 07:01:28 localhost kernel: hub 1-0:1.0: USB hub found
Nov 22 07:01:28 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 22 07:01:28 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 22 07:01:28 localhost kernel: usbserial: USB Serial support registered for generic
Nov 22 07:01:28 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 22 07:01:28 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 22 07:01:28 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 22 07:01:28 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 22 07:01:28 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 22 07:01:28 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 22 07:01:28 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 22 07:01:28 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-22T07:01:27 UTC (1763794887)
Nov 22 07:01:28 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 22 07:01:28 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 22 07:01:28 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 22 07:01:28 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 22 07:01:28 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 22 07:01:28 localhost kernel: usbcore: registered new interface driver usbhid
Nov 22 07:01:28 localhost kernel: usbhid: USB HID core driver
Nov 22 07:01:28 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 22 07:01:28 localhost kernel: Initializing XFRM netlink socket
Nov 22 07:01:28 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 22 07:01:28 localhost kernel: Segment Routing with IPv6
Nov 22 07:01:28 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 22 07:01:28 localhost kernel: mpls_gso: MPLS GSO support
Nov 22 07:01:28 localhost kernel: IPI shorthand broadcast: enabled
Nov 22 07:01:28 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 22 07:01:28 localhost kernel: AES CTR mode by8 optimization enabled
Nov 22 07:01:28 localhost kernel: sched_clock: Marking stable (1181001931, 160863575)->(1480688638, -138823132)
Nov 22 07:01:28 localhost kernel: registered taskstats version 1
Nov 22 07:01:28 localhost kernel: Loading compiled-in X.509 certificates
Nov 22 07:01:28 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 22 07:01:28 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 22 07:01:28 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 22 07:01:28 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 22 07:01:28 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 22 07:01:28 localhost kernel: Demotion targets for Node 0: null
Nov 22 07:01:28 localhost kernel: page_owner is disabled
Nov 22 07:01:28 localhost kernel: Key type .fscrypt registered
Nov 22 07:01:28 localhost kernel: Key type fscrypt-provisioning registered
Nov 22 07:01:28 localhost kernel: Key type big_key registered
Nov 22 07:01:28 localhost kernel: Key type encrypted registered
Nov 22 07:01:28 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 22 07:01:28 localhost kernel: Loading compiled-in module X.509 certificates
Nov 22 07:01:28 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 22 07:01:28 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 22 07:01:28 localhost kernel: ima: No architecture policies found
Nov 22 07:01:28 localhost kernel: evm: Initialising EVM extended attributes:
Nov 22 07:01:28 localhost kernel: evm: security.selinux
Nov 22 07:01:28 localhost kernel: evm: security.SMACK64 (disabled)
Nov 22 07:01:28 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 22 07:01:28 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 22 07:01:28 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 22 07:01:28 localhost kernel: evm: security.apparmor (disabled)
Nov 22 07:01:28 localhost kernel: evm: security.ima
Nov 22 07:01:28 localhost kernel: evm: security.capability
Nov 22 07:01:28 localhost kernel: evm: HMAC attrs: 0x1
Nov 22 07:01:28 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 22 07:01:28 localhost kernel: Running certificate verification RSA selftest
Nov 22 07:01:28 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 22 07:01:28 localhost kernel: Running certificate verification ECDSA selftest
Nov 22 07:01:28 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 22 07:01:28 localhost kernel: clk: Disabling unused clocks
Nov 22 07:01:28 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 22 07:01:28 localhost kernel: Freeing unused kernel image (initmem) memory: 4188K
Nov 22 07:01:28 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 22 07:01:28 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 22 07:01:28 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 22 07:01:28 localhost kernel: Run /init as init process
Nov 22 07:01:28 localhost kernel:   with arguments:
Nov 22 07:01:28 localhost kernel:     /init
Nov 22 07:01:28 localhost kernel:   with environment:
Nov 22 07:01:28 localhost kernel:     HOME=/
Nov 22 07:01:28 localhost kernel:     TERM=linux
Nov 22 07:01:28 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64
Nov 22 07:01:28 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 22 07:01:28 localhost systemd[1]: Detected virtualization kvm.
Nov 22 07:01:28 localhost systemd[1]: Detected architecture x86-64.
Nov 22 07:01:28 localhost systemd[1]: Running in initrd.
Nov 22 07:01:28 localhost systemd[1]: No hostname configured, using default hostname.
Nov 22 07:01:28 localhost systemd[1]: Hostname set to <localhost>.
Nov 22 07:01:28 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 22 07:01:28 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 22 07:01:28 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 22 07:01:28 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 22 07:01:28 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 22 07:01:28 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 22 07:01:28 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 22 07:01:28 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 22 07:01:28 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 22 07:01:28 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 22 07:01:28 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 22 07:01:28 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 22 07:01:28 localhost systemd[1]: Reached target Local File Systems.
Nov 22 07:01:28 localhost systemd[1]: Reached target Path Units.
Nov 22 07:01:28 localhost systemd[1]: Reached target Slice Units.
Nov 22 07:01:28 localhost systemd[1]: Reached target Swaps.
Nov 22 07:01:28 localhost systemd[1]: Reached target Timer Units.
Nov 22 07:01:28 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 22 07:01:28 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 22 07:01:28 localhost systemd[1]: Listening on Journal Socket.
Nov 22 07:01:28 localhost systemd[1]: Listening on udev Control Socket.
Nov 22 07:01:28 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 22 07:01:28 localhost systemd[1]: Reached target Socket Units.
Nov 22 07:01:28 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 22 07:01:28 localhost systemd[1]: Starting Journal Service...
Nov 22 07:01:28 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 22 07:01:28 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 22 07:01:28 localhost systemd[1]: Starting Create System Users...
Nov 22 07:01:28 localhost systemd[1]: Starting Setup Virtual Console...
Nov 22 07:01:28 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 22 07:01:28 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 22 07:01:28 localhost systemd[1]: Finished Create System Users.
Nov 22 07:01:28 localhost systemd-journald[306]: Journal started
Nov 22 07:01:28 localhost systemd-journald[306]: Runtime Journal (/run/log/journal/11d569d2d99e416a983ebf082353d9ca) is 8.0M, max 153.6M, 145.6M free.
Nov 22 07:01:28 localhost systemd-sysusers[311]: Creating group 'users' with GID 100.
Nov 22 07:01:28 localhost systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Nov 22 07:01:28 localhost systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 22 07:01:28 localhost systemd[1]: Started Journal Service.
Nov 22 07:01:28 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 22 07:01:28 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 22 07:01:28 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 22 07:01:28 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 22 07:01:28 localhost systemd[1]: Finished Setup Virtual Console.
Nov 22 07:01:28 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 22 07:01:28 localhost systemd[1]: Starting dracut cmdline hook...
Nov 22 07:01:28 localhost dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Nov 22 07:01:28 localhost dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 22 07:01:28 localhost systemd[1]: Finished dracut cmdline hook.
Nov 22 07:01:28 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 22 07:01:28 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 22 07:01:28 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 22 07:01:28 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 22 07:01:28 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 22 07:01:28 localhost kernel: RPC: Registered udp transport module.
Nov 22 07:01:28 localhost kernel: RPC: Registered tcp transport module.
Nov 22 07:01:28 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 22 07:01:28 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 22 07:01:28 localhost rpc.statd[444]: Version 2.5.4 starting
Nov 22 07:01:28 localhost rpc.statd[444]: Initializing NSM state
Nov 22 07:01:28 localhost rpc.idmapd[449]: Setting log level to 0
Nov 22 07:01:28 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 22 07:01:28 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 22 07:01:28 localhost systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Nov 22 07:01:28 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 22 07:01:28 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 22 07:01:28 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 22 07:01:28 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 22 07:01:29 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 22 07:01:29 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 22 07:01:29 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 22 07:01:29 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 22 07:01:29 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 22 07:01:29 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 22 07:01:29 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 22 07:01:29 localhost systemd[1]: Reached target Network.
Nov 22 07:01:29 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 22 07:01:29 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 22 07:01:29 localhost systemd[1]: Starting dracut initqueue hook...
Nov 22 07:01:29 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 22 07:01:29 localhost systemd[1]: Reached target System Initialization.
Nov 22 07:01:29 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 22 07:01:29 localhost kernel:  vda: vda1
Nov 22 07:01:29 localhost systemd[1]: Reached target Basic System.
Nov 22 07:01:29 localhost kernel: libata version 3.00 loaded.
Nov 22 07:01:29 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Nov 22 07:01:29 localhost kernel: scsi host0: ata_piix
Nov 22 07:01:29 localhost kernel: scsi host1: ata_piix
Nov 22 07:01:29 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 22 07:01:29 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 22 07:01:29 localhost systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 22 07:01:29 localhost systemd[1]: Reached target Initrd Root Device.
Nov 22 07:01:29 localhost kernel: ata1: found unknown device (class 0)
Nov 22 07:01:29 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 22 07:01:29 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 22 07:01:29 localhost systemd-udevd[465]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 07:01:29 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 22 07:01:29 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 22 07:01:29 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 22 07:01:29 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 22 07:01:29 localhost systemd[1]: Finished dracut initqueue hook.
Nov 22 07:01:29 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 22 07:01:29 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 22 07:01:29 localhost systemd[1]: Reached target Remote File Systems.
Nov 22 07:01:29 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 22 07:01:29 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 22 07:01:29 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 22 07:01:29 localhost systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Nov 22 07:01:29 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 22 07:01:29 localhost systemd[1]: Mounting /sysroot...
Nov 22 07:01:30 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 22 07:01:30 localhost kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 22 07:01:30 localhost kernel: XFS (vda1): Ending clean mount
Nov 22 07:01:30 localhost systemd[1]: Mounted /sysroot.
Nov 22 07:01:30 localhost systemd[1]: Reached target Initrd Root File System.
Nov 22 07:01:30 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 22 07:01:30 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 22 07:01:30 localhost systemd[1]: Reached target Initrd File Systems.
Nov 22 07:01:30 localhost systemd[1]: Reached target Initrd Default Target.
Nov 22 07:01:30 localhost systemd[1]: Starting dracut mount hook...
Nov 22 07:01:30 localhost systemd[1]: Finished dracut mount hook.
Nov 22 07:01:30 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 22 07:01:30 localhost rpc.idmapd[449]: exiting on signal 15
Nov 22 07:01:30 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 22 07:01:30 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 22 07:01:30 localhost systemd[1]: Stopped target Network.
Nov 22 07:01:30 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 22 07:01:30 localhost systemd[1]: Stopped target Timer Units.
Nov 22 07:01:30 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 22 07:01:30 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 22 07:01:30 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 22 07:01:30 localhost systemd[1]: Stopped target Basic System.
Nov 22 07:01:30 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 22 07:01:30 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 22 07:01:30 localhost systemd[1]: Stopped target Path Units.
Nov 22 07:01:30 localhost systemd[1]: Stopped target Remote File Systems.
Nov 22 07:01:30 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 22 07:01:30 localhost systemd[1]: Stopped target Slice Units.
Nov 22 07:01:30 localhost systemd[1]: Stopped target Socket Units.
Nov 22 07:01:30 localhost systemd[1]: Stopped target System Initialization.
Nov 22 07:01:30 localhost systemd[1]: Stopped target Local File Systems.
Nov 22 07:01:30 localhost systemd[1]: Stopped target Swaps.
Nov 22 07:01:30 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Stopped dracut mount hook.
Nov 22 07:01:30 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 22 07:01:30 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 22 07:01:30 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 22 07:01:30 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 22 07:01:30 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 22 07:01:30 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 22 07:01:30 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 22 07:01:30 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 22 07:01:30 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 22 07:01:30 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 22 07:01:30 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 22 07:01:30 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 22 07:01:30 localhost systemd[1]: systemd-udevd.service: Consumed 1.004s CPU time.
Nov 22 07:01:30 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Closed udev Control Socket.
Nov 22 07:01:30 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Closed udev Kernel Socket.
Nov 22 07:01:30 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 22 07:01:30 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 22 07:01:30 localhost systemd[1]: Starting Cleanup udev Database...
Nov 22 07:01:30 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 22 07:01:30 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 22 07:01:30 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Stopped Create System Users.
Nov 22 07:01:30 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 22 07:01:30 localhost systemd[1]: Finished Cleanup udev Database.
Nov 22 07:01:30 localhost systemd[1]: Reached target Switch Root.
Nov 22 07:01:30 localhost systemd[1]: Starting Switch Root...
Nov 22 07:01:30 localhost systemd[1]: Switching root.
Nov 22 07:01:30 localhost systemd-journald[306]: Journal stopped
Nov 22 07:01:32 localhost systemd-journald[306]: Received SIGTERM from PID 1 (systemd).
Nov 22 07:01:32 localhost kernel: audit: type=1404 audit(1763794891.061:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 22 07:01:32 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 07:01:32 localhost kernel: SELinux:  policy capability open_perms=1
Nov 22 07:01:32 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 07:01:32 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 22 07:01:32 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 07:01:32 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 07:01:32 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 07:01:32 localhost kernel: audit: type=1403 audit(1763794891.290:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 22 07:01:32 localhost systemd[1]: Successfully loaded SELinux policy in 234.847ms.
Nov 22 07:01:32 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 37.577ms.
Nov 22 07:01:32 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 22 07:01:32 localhost systemd[1]: Detected virtualization kvm.
Nov 22 07:01:32 localhost systemd[1]: Detected architecture x86-64.
Nov 22 07:01:32 localhost systemd-rc-local-generator[643]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 07:01:32 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 22 07:01:32 localhost systemd[1]: Stopped Switch Root.
Nov 22 07:01:32 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 22 07:01:32 localhost systemd[1]: Created slice Slice /system/getty.
Nov 22 07:01:32 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 22 07:01:32 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 22 07:01:32 localhost systemd[1]: Created slice User and Session Slice.
Nov 22 07:01:32 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 22 07:01:32 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 22 07:01:32 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 22 07:01:32 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 22 07:01:32 localhost systemd[1]: Stopped target Switch Root.
Nov 22 07:01:32 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 22 07:01:32 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 22 07:01:32 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 22 07:01:32 localhost systemd[1]: Reached target Path Units.
Nov 22 07:01:32 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 22 07:01:32 localhost systemd[1]: Reached target Slice Units.
Nov 22 07:01:32 localhost systemd[1]: Reached target Swaps.
Nov 22 07:01:32 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 22 07:01:32 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 22 07:01:32 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 22 07:01:32 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 22 07:01:32 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 22 07:01:32 localhost systemd[1]: Listening on udev Control Socket.
Nov 22 07:01:32 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 22 07:01:32 localhost systemd[1]: Mounting Huge Pages File System...
Nov 22 07:01:32 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 22 07:01:32 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 22 07:01:32 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 22 07:01:32 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 22 07:01:32 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 22 07:01:32 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 22 07:01:32 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 22 07:01:32 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 22 07:01:32 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 22 07:01:32 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 22 07:01:32 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 22 07:01:32 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 22 07:01:32 localhost systemd[1]: Stopped Journal Service.
Nov 22 07:01:32 localhost kernel: fuse: init (API version 7.37)
Nov 22 07:01:32 localhost systemd[1]: Starting Journal Service...
Nov 22 07:01:32 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 22 07:01:32 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 22 07:01:32 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 22 07:01:32 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 22 07:01:32 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 22 07:01:32 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 22 07:01:32 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 22 07:01:32 localhost systemd-journald[684]: Journal started
Nov 22 07:01:32 localhost systemd-journald[684]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 22 07:01:32 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 22 07:01:32 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 22 07:01:32 localhost systemd[1]: Mounted Huge Pages File System.
Nov 22 07:01:32 localhost systemd[1]: Started Journal Service.
Nov 22 07:01:32 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 22 07:01:32 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 22 07:01:32 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 22 07:01:32 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 22 07:01:32 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 22 07:01:32 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 22 07:01:32 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 22 07:01:32 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 22 07:01:32 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 22 07:01:32 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 22 07:01:32 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 22 07:01:32 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 22 07:01:32 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 22 07:01:32 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 22 07:01:32 localhost kernel: ACPI: bus type drm_connector registered
Nov 22 07:01:32 localhost systemd[1]: Mounting FUSE Control File System...
Nov 22 07:01:32 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 22 07:01:32 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 22 07:01:32 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 22 07:01:32 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 22 07:01:32 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 22 07:01:32 localhost systemd[1]: Starting Create System Users...
Nov 22 07:01:32 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 22 07:01:32 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 22 07:01:32 localhost systemd-journald[684]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 22 07:01:32 localhost systemd-journald[684]: Received client request to flush runtime journal.
Nov 22 07:01:32 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 22 07:01:32 localhost systemd[1]: Mounted FUSE Control File System.
Nov 22 07:01:32 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 22 07:01:32 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 22 07:01:32 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 22 07:01:32 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 22 07:01:32 localhost systemd[1]: Finished Create System Users.
Nov 22 07:01:32 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 22 07:01:32 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 22 07:01:32 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 22 07:01:32 localhost systemd[1]: Reached target Local File Systems.
Nov 22 07:01:32 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 22 07:01:32 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 22 07:01:32 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 22 07:01:32 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 22 07:01:32 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 22 07:01:32 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 22 07:01:32 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 22 07:01:32 localhost bootctl[701]: Couldn't find EFI system partition, skipping.
Nov 22 07:01:32 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 22 07:01:33 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 22 07:01:33 localhost systemd[1]: Starting Security Auditing Service...
Nov 22 07:01:33 localhost systemd[1]: Starting RPC Bind...
Nov 22 07:01:33 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 22 07:01:33 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 22 07:01:33 localhost systemd[1]: Started RPC Bind.
Nov 22 07:01:33 localhost auditd[707]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 22 07:01:33 localhost auditd[707]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 22 07:01:33 localhost augenrules[712]: /sbin/augenrules: No change
Nov 22 07:01:33 localhost augenrules[727]: No rules
Nov 22 07:01:33 localhost augenrules[727]: enabled 1
Nov 22 07:01:33 localhost augenrules[727]: failure 1
Nov 22 07:01:33 localhost augenrules[727]: pid 707
Nov 22 07:01:33 localhost augenrules[727]: rate_limit 0
Nov 22 07:01:33 localhost augenrules[727]: backlog_limit 8192
Nov 22 07:01:33 localhost augenrules[727]: lost 0
Nov 22 07:01:33 localhost augenrules[727]: backlog 1
Nov 22 07:01:33 localhost augenrules[727]: backlog_wait_time 60000
Nov 22 07:01:33 localhost augenrules[727]: backlog_wait_time_actual 0
Nov 22 07:01:33 localhost augenrules[727]: enabled 1
Nov 22 07:01:33 localhost augenrules[727]: failure 1
Nov 22 07:01:33 localhost augenrules[727]: pid 707
Nov 22 07:01:33 localhost augenrules[727]: rate_limit 0
Nov 22 07:01:33 localhost augenrules[727]: backlog_limit 8192
Nov 22 07:01:33 localhost augenrules[727]: lost 0
Nov 22 07:01:33 localhost augenrules[727]: backlog 3
Nov 22 07:01:33 localhost augenrules[727]: backlog_wait_time 60000
Nov 22 07:01:33 localhost augenrules[727]: backlog_wait_time_actual 0
Nov 22 07:01:33 localhost augenrules[727]: enabled 1
Nov 22 07:01:33 localhost augenrules[727]: failure 1
Nov 22 07:01:33 localhost augenrules[727]: pid 707
Nov 22 07:01:33 localhost augenrules[727]: rate_limit 0
Nov 22 07:01:33 localhost augenrules[727]: backlog_limit 8192
Nov 22 07:01:33 localhost augenrules[727]: lost 0
Nov 22 07:01:33 localhost augenrules[727]: backlog 2
Nov 22 07:01:33 localhost augenrules[727]: backlog_wait_time 60000
Nov 22 07:01:33 localhost augenrules[727]: backlog_wait_time_actual 0
Nov 22 07:01:33 localhost systemd[1]: Started Security Auditing Service.
Nov 22 07:01:33 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 22 07:01:33 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 22 07:01:33 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 22 07:01:33 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 22 07:01:33 localhost systemd-udevd[735]: Using default interface naming scheme 'rhel-9.0'.
Nov 22 07:01:33 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 22 07:01:33 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 22 07:01:33 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 22 07:01:33 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 22 07:01:33 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 22 07:01:33 localhost systemd-udevd[749]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 07:01:33 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 22 07:01:33 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 22 07:01:33 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 22 07:01:33 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 22 07:01:34 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 22 07:01:34 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 22 07:01:34 localhost kernel: Console: switching to colour dummy device 80x25
Nov 22 07:01:34 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 22 07:01:34 localhost kernel: [drm] features: -context_init
Nov 22 07:01:34 localhost kernel: [drm] number of scanouts: 1
Nov 22 07:01:34 localhost kernel: [drm] number of cap sets: 0
Nov 22 07:01:34 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 22 07:01:34 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 22 07:01:34 localhost kernel: Console: switching to colour frame buffer device 128x48
Nov 22 07:01:34 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 22 07:01:34 localhost kernel: kvm_amd: TSC scaling supported
Nov 22 07:01:34 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 22 07:01:34 localhost kernel: kvm_amd: Nested Paging enabled
Nov 22 07:01:34 localhost kernel: kvm_amd: LBR virtualization supported
Nov 22 07:01:34 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 22 07:01:34 localhost systemd[1]: Starting Update is Completed...
Nov 22 07:01:34 localhost systemd[1]: Finished Update is Completed.
Nov 22 07:01:34 localhost systemd[1]: Reached target System Initialization.
Nov 22 07:01:34 localhost systemd[1]: Started dnf makecache --timer.
Nov 22 07:01:34 localhost systemd[1]: Started Daily rotation of log files.
Nov 22 07:01:34 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 22 07:01:34 localhost systemd[1]: Reached target Timer Units.
Nov 22 07:01:34 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 22 07:01:34 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 22 07:01:34 localhost systemd[1]: Reached target Socket Units.
Nov 22 07:01:34 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 22 07:01:34 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 22 07:01:34 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 22 07:01:34 localhost systemd[1]: Reached target Basic System.
Nov 22 07:01:35 localhost dbus-broker-lau[816]: Ready
Nov 22 07:01:35 localhost systemd[1]: Starting NTP client/server...
Nov 22 07:01:35 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 22 07:01:35 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 22 07:01:35 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 22 07:01:35 localhost systemd[1]: Started irqbalance daemon.
Nov 22 07:01:35 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 22 07:01:35 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 07:01:35 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 07:01:35 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 07:01:35 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 22 07:01:35 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 22 07:01:35 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 22 07:01:35 localhost systemd[1]: Starting User Login Management...
Nov 22 07:01:35 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 22 07:01:35 localhost chronyd[835]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 22 07:01:35 localhost chronyd[835]: Loaded 0 symmetric keys
Nov 22 07:01:35 localhost chronyd[835]: Using right/UTC timezone to obtain leap second data
Nov 22 07:01:35 localhost chronyd[835]: Loaded seccomp filter (level 2)
Nov 22 07:01:35 localhost systemd[1]: Started NTP client/server.
Nov 22 07:01:35 localhost systemd-logind[826]: New seat seat0.
Nov 22 07:01:35 localhost systemd-logind[826]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 22 07:01:35 localhost systemd-logind[826]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 22 07:01:35 localhost systemd[1]: Started User Login Management.
Nov 22 07:01:35 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 22 07:01:35 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 22 07:01:35 localhost iptables.init[821]: iptables: Applying firewall rules: [  OK  ]
Nov 22 07:01:35 localhost systemd[1]: Finished IPv4 firewall with iptables.
Nov 22 07:01:36 localhost cloud-init[844]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 22 Nov 2025 07:01:36 +0000. Up 9.88 seconds.
Nov 22 07:01:36 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 22 07:01:36 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 22 07:01:36 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpl68qixeq.mount: Deactivated successfully.
Nov 22 07:01:36 localhost systemd[1]: Starting Hostname Service...
Nov 22 07:01:36 localhost systemd[1]: Started Hostname Service.
Nov 22 07:01:36 np0005531992.novalocal systemd-hostnamed[858]: Hostname set to <np0005531992.novalocal> (static)
Nov 22 07:01:36 np0005531992.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 22 07:01:36 np0005531992.novalocal systemd[1]: Reached target Preparation for Network.
Nov 22 07:01:36 np0005531992.novalocal systemd[1]: Starting Network Manager...
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8234] NetworkManager (version 1.54.1-1.el9) is starting... (boot:a7489e2e-a622-4254-9a7e-02eae9fa3dfd)
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8240] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8432] manager[0x55cb9c864080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8483] hostname: hostname: using hostnamed
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8483] hostname: static hostname changed from (none) to "np0005531992.novalocal"
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8487] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8586] manager[0x55cb9c864080]: rfkill: Wi-Fi hardware radio set enabled
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8587] manager[0x55cb9c864080]: rfkill: WWAN hardware radio set enabled
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8735] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8735] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8736] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8737] manager: Networking is enabled by state file
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8738] settings: Loaded settings plugin: keyfile (internal)
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8783] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8812] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8840] dhcp: init: Using DHCP client 'internal'
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8843] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8859] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8878] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8887] device (lo): Activation: starting connection 'lo' (d01cbcdc-cc87-4c04-b365-895d2218de25)
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8896] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8900] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 07:01:36 np0005531992.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8931] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8950] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 22 07:01:36 np0005531992.novalocal systemd[1]: Started Network Manager.
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8956] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8964] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8968] device (eth0): carrier: link connected
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8974] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 22 07:01:36 np0005531992.novalocal systemd[1]: Reached target Network.
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8983] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8991] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8996] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.8997] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.9002] manager: NetworkManager state is now CONNECTING
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.9005] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.9013] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.9017] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 07:01:36 np0005531992.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 22 07:01:36 np0005531992.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 22 07:01:36 np0005531992.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.9089] dhcp4 (eth0): state changed new lease, address=38.129.56.85
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.9097] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.9115] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.9131] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.9132] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.9138] device (lo): Activation: successful, device activated.
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.9159] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.9161] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.9166] manager: NetworkManager state is now CONNECTED_SITE
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.9170] device (eth0): Activation: successful, device activated.
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.9175] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 22 07:01:36 np0005531992.novalocal NetworkManager[862]: <info>  [1763794896.9177] manager: startup complete
Nov 22 07:01:36 np0005531992.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Nov 22 07:01:36 np0005531992.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 22 07:01:36 np0005531992.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 22 07:01:36 np0005531992.novalocal systemd[1]: Reached target NFS client services.
Nov 22 07:01:36 np0005531992.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Nov 22 07:01:36 np0005531992.novalocal systemd[1]: Reached target Remote File Systems.
Nov 22 07:01:36 np0005531992.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 22 07:01:36 np0005531992.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 22 07:01:36 np0005531992.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 22 Nov 2025 07:01:37 +0000. Up 10.91 seconds.
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: |  eth0  | True |         38.129.56.85         | 255.255.255.0 | global | fa:16:3e:21:85:15 |
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: |  eth0  | True | fe80::f816:3eff:fe21:8515/64 |       .       |  link  | fa:16:3e:21:85:15 |
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 22 07:01:37 np0005531992.novalocal cloud-init[929]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 22 07:01:38 np0005531992.novalocal useradd[996]: new group: name=cloud-user, GID=1001
Nov 22 07:01:38 np0005531992.novalocal useradd[996]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 22 07:01:38 np0005531992.novalocal useradd[996]: add 'cloud-user' to group 'adm'
Nov 22 07:01:38 np0005531992.novalocal useradd[996]: add 'cloud-user' to group 'systemd-journal'
Nov 22 07:01:38 np0005531992.novalocal useradd[996]: add 'cloud-user' to shadow group 'adm'
Nov 22 07:01:38 np0005531992.novalocal useradd[996]: add 'cloud-user' to shadow group 'systemd-journal'
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: Generating public/private rsa key pair.
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: The key fingerprint is:
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: SHA256:AhK384w10T6WBbvATlUeMXSb1SuT1WxwVFwR4sXlJT8 root@np0005531992.novalocal
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: The key's randomart image is:
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: +---[RSA 3072]----+
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |  . . ..o+*...=X#|
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |   o o o.o.+.+oBB|
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |  . + *..o. o.oE+|
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |   . X o=.   + ..|
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |    . =.S.    o  |
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |       .         |
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |                 |
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |                 |
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |                 |
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: +----[SHA256]-----+
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: Generating public/private ecdsa key pair.
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: The key fingerprint is:
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: SHA256:2GLBPI+tT9ISPM71UXnVVgYmrmsyetTVHSIrN6nZAmk root@np0005531992.novalocal
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: The key's randomart image is:
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: +---[ECDSA 256]---+
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |            . o.*|
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |     o     o = +o|
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |      = .   B.oo.|
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |     . E . B... .|
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |      X S.O..    |
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |     + B.=.+     |
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |      *.= =      |
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |       *.+       |
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |      ...        |
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: +----[SHA256]-----+
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: Generating public/private ed25519 key pair.
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: The key fingerprint is:
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: SHA256:+knUnOy3AqLayUDZui5PqbLQDfT2363Eu8hCM7zwLWc root@np0005531992.novalocal
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: The key's randomart image is:
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: +--[ED25519 256]--+
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |                 |
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |                 |
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |  .              |
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: | . +     + .     |
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |  + +.  S =      |
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: | o *..*o.o       |
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |. * .=o*..+ .    |
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |+o =..==E=.+ .   |
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: |+==.+  =* =+o    |
Nov 22 07:01:38 np0005531992.novalocal cloud-init[929]: +----[SHA256]-----+
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Reached target Cloud-config availability.
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Reached target Network is Online.
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Starting Crash recovery kernel arming...
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Starting System Logging Service...
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Starting OpenSSH server daemon...
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Starting Permit User Sessions...
Nov 22 07:01:39 np0005531992.novalocal sm-notify[1012]: Version 2.5.4 starting
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Started Notify NFS peers of a restart.
Nov 22 07:01:39 np0005531992.novalocal sshd[1014]: Server listening on 0.0.0.0 port 22.
Nov 22 07:01:39 np0005531992.novalocal sshd[1014]: Server listening on :: port 22.
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Started OpenSSH server daemon.
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Finished Permit User Sessions.
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Started Command Scheduler.
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Started Getty on tty1.
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Started Serial Getty on ttyS0.
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Reached target Login Prompts.
Nov 22 07:01:39 np0005531992.novalocal crond[1017]: (CRON) STARTUP (1.5.7)
Nov 22 07:01:39 np0005531992.novalocal crond[1017]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 22 07:01:39 np0005531992.novalocal crond[1017]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 60% if used.)
Nov 22 07:01:39 np0005531992.novalocal crond[1017]: (CRON) INFO (running with inotify support)
Nov 22 07:01:39 np0005531992.novalocal rsyslogd[1013]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1013" x-info="https://www.rsyslog.com"] start
Nov 22 07:01:39 np0005531992.novalocal rsyslogd[1013]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Started System Logging Service.
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Reached target Multi-User System.
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 22 07:01:39 np0005531992.novalocal rsyslogd[1013]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 07:01:39 np0005531992.novalocal kdumpctl[1022]: kdump: No kdump initial ramdisk found.
Nov 22 07:01:39 np0005531992.novalocal kdumpctl[1022]: kdump: Rebuilding /boot/initramfs-5.14.0-639.el9.x86_64kdump.img
Nov 22 07:01:39 np0005531992.novalocal cloud-init[1140]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 22 Nov 2025 07:01:39 +0000. Up 13.02 seconds.
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Nov 22 07:01:39 np0005531992.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Nov 22 07:01:39 np0005531992.novalocal dracut[1273]: dracut-057-102.git20250818.el9
Nov 22 07:01:39 np0005531992.novalocal cloud-init[1291]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 22 Nov 2025 07:01:39 +0000. Up 13.44 seconds.
Nov 22 07:01:39 np0005531992.novalocal dracut[1275]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-639.el9.x86_64kdump.img 5.14.0-639.el9.x86_64
Nov 22 07:01:39 np0005531992.novalocal cloud-init[1309]: #############################################################
Nov 22 07:01:39 np0005531992.novalocal cloud-init[1313]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 22 07:01:39 np0005531992.novalocal cloud-init[1318]: 256 SHA256:2GLBPI+tT9ISPM71UXnVVgYmrmsyetTVHSIrN6nZAmk root@np0005531992.novalocal (ECDSA)
Nov 22 07:01:39 np0005531992.novalocal cloud-init[1325]: 256 SHA256:+knUnOy3AqLayUDZui5PqbLQDfT2363Eu8hCM7zwLWc root@np0005531992.novalocal (ED25519)
Nov 22 07:01:39 np0005531992.novalocal cloud-init[1330]: 3072 SHA256:AhK384w10T6WBbvATlUeMXSb1SuT1WxwVFwR4sXlJT8 root@np0005531992.novalocal (RSA)
Nov 22 07:01:39 np0005531992.novalocal cloud-init[1333]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 22 07:01:39 np0005531992.novalocal cloud-init[1335]: #############################################################
Nov 22 07:01:40 np0005531992.novalocal cloud-init[1291]: Cloud-init v. 24.4-7.el9 finished at Sat, 22 Nov 2025 07:01:39 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 13.64 seconds
Nov 22 07:01:40 np0005531992.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Nov 22 07:01:40 np0005531992.novalocal systemd[1]: Reached target Cloud-init target.
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 22 07:01:40 np0005531992.novalocal sshd-session[1470]: Connection reset by 38.102.83.114 port 37884 [preauth]
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 22 07:01:40 np0005531992.novalocal sshd-session[1490]: Unable to negotiate with 38.102.83.114 port 37898: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 22 07:01:40 np0005531992.novalocal sshd-session[1513]: Unable to negotiate with 38.102.83.114 port 37912: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 22 07:01:40 np0005531992.novalocal sshd-session[1524]: Unable to negotiate with 38.102.83.114 port 37920: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 22 07:01:40 np0005531992.novalocal sshd-session[1503]: Connection closed by 38.102.83.114 port 37900 [preauth]
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 22 07:01:40 np0005531992.novalocal sshd-session[1582]: Unable to negotiate with 38.102.83.114 port 37940: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 22 07:01:40 np0005531992.novalocal sshd-session[1591]: Unable to negotiate with 38.102.83.114 port 37950: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 22 07:01:40 np0005531992.novalocal sshd-session[1532]: Connection closed by 38.102.83.114 port 37924 [preauth]
Nov 22 07:01:40 np0005531992.novalocal sshd-session[1568]: Connection closed by 38.102.83.114 port 37930 [preauth]
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 22 07:01:40 np0005531992.novalocal dracut[1275]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: memstrack is not available
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: memstrack is not available
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: *** Including module: systemd ***
Nov 22 07:01:41 np0005531992.novalocal dracut[1275]: *** Including module: fips ***
Nov 22 07:01:41 np0005531992.novalocal chronyd[835]: Selected source 216.128.178.20 (2.centos.pool.ntp.org)
Nov 22 07:01:41 np0005531992.novalocal chronyd[835]: System clock TAI offset set to 37 seconds
Nov 22 07:01:42 np0005531992.novalocal dracut[1275]: *** Including module: systemd-initrd ***
Nov 22 07:01:42 np0005531992.novalocal dracut[1275]: *** Including module: i18n ***
Nov 22 07:01:42 np0005531992.novalocal dracut[1275]: *** Including module: drm ***
Nov 22 07:01:42 np0005531992.novalocal dracut[1275]: *** Including module: prefixdevname ***
Nov 22 07:01:42 np0005531992.novalocal dracut[1275]: *** Including module: kernel-modules ***
Nov 22 07:01:42 np0005531992.novalocal kernel: block vda: the capability attribute has been deprecated.
Nov 22 07:01:43 np0005531992.novalocal dracut[1275]: *** Including module: kernel-modules-extra ***
Nov 22 07:01:43 np0005531992.novalocal dracut[1275]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 22 07:01:43 np0005531992.novalocal dracut[1275]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 22 07:01:43 np0005531992.novalocal dracut[1275]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 22 07:01:43 np0005531992.novalocal dracut[1275]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Nov 22 07:01:43 np0005531992.novalocal chronyd[835]: Selected source 206.108.0.132 (2.centos.pool.ntp.org)
Nov 22 07:01:43 np0005531992.novalocal dracut[1275]: *** Including module: qemu ***
Nov 22 07:01:43 np0005531992.novalocal dracut[1275]: *** Including module: fstab-sys ***
Nov 22 07:01:43 np0005531992.novalocal dracut[1275]: *** Including module: rootfs-block ***
Nov 22 07:01:43 np0005531992.novalocal dracut[1275]: *** Including module: terminfo ***
Nov 22 07:01:43 np0005531992.novalocal dracut[1275]: *** Including module: udev-rules ***
Nov 22 07:01:43 np0005531992.novalocal dracut[1275]: Skipping udev rule: 91-permissions.rules
Nov 22 07:01:43 np0005531992.novalocal dracut[1275]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 22 07:01:44 np0005531992.novalocal dracut[1275]: *** Including module: virtiofs ***
Nov 22 07:01:44 np0005531992.novalocal dracut[1275]: *** Including module: dracut-systemd ***
Nov 22 07:01:44 np0005531992.novalocal dracut[1275]: *** Including module: usrmount ***
Nov 22 07:01:44 np0005531992.novalocal dracut[1275]: *** Including module: base ***
Nov 22 07:01:44 np0005531992.novalocal dracut[1275]: *** Including module: fs-lib ***
Nov 22 07:01:44 np0005531992.novalocal dracut[1275]: *** Including module: kdumpbase ***
Nov 22 07:01:44 np0005531992.novalocal dracut[1275]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 22 07:01:44 np0005531992.novalocal dracut[1275]:   microcode_ctl module: mangling fw_dir
Nov 22 07:01:44 np0005531992.novalocal dracut[1275]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 22 07:01:44 np0005531992.novalocal dracut[1275]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 22 07:01:44 np0005531992.novalocal dracut[1275]:     microcode_ctl: configuration "intel" is ignored
Nov 22 07:01:44 np0005531992.novalocal dracut[1275]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 22 07:01:44 np0005531992.novalocal dracut[1275]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 22 07:01:44 np0005531992.novalocal dracut[1275]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 22 07:01:44 np0005531992.novalocal dracut[1275]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 22 07:01:44 np0005531992.novalocal dracut[1275]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 22 07:01:45 np0005531992.novalocal dracut[1275]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 22 07:01:45 np0005531992.novalocal dracut[1275]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 22 07:01:45 np0005531992.novalocal dracut[1275]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 22 07:01:45 np0005531992.novalocal dracut[1275]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 22 07:01:45 np0005531992.novalocal dracut[1275]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 22 07:01:45 np0005531992.novalocal dracut[1275]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 22 07:01:45 np0005531992.novalocal dracut[1275]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 22 07:01:45 np0005531992.novalocal dracut[1275]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 22 07:01:45 np0005531992.novalocal dracut[1275]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 22 07:01:45 np0005531992.novalocal dracut[1275]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 22 07:01:45 np0005531992.novalocal dracut[1275]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 22 07:01:45 np0005531992.novalocal dracut[1275]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 22 07:01:45 np0005531992.novalocal dracut[1275]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 22 07:01:45 np0005531992.novalocal dracut[1275]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 22 07:01:45 np0005531992.novalocal dracut[1275]: *** Including module: openssl ***
Nov 22 07:01:45 np0005531992.novalocal dracut[1275]: *** Including module: shutdown ***
Nov 22 07:01:45 np0005531992.novalocal irqbalance[822]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 22 07:01:45 np0005531992.novalocal irqbalance[822]: IRQ 25 affinity is now unmanaged
Nov 22 07:01:45 np0005531992.novalocal irqbalance[822]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 22 07:01:45 np0005531992.novalocal irqbalance[822]: IRQ 31 affinity is now unmanaged
Nov 22 07:01:45 np0005531992.novalocal irqbalance[822]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 22 07:01:45 np0005531992.novalocal irqbalance[822]: IRQ 28 affinity is now unmanaged
Nov 22 07:01:45 np0005531992.novalocal irqbalance[822]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 22 07:01:45 np0005531992.novalocal irqbalance[822]: IRQ 32 affinity is now unmanaged
Nov 22 07:01:45 np0005531992.novalocal irqbalance[822]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 22 07:01:45 np0005531992.novalocal irqbalance[822]: IRQ 30 affinity is now unmanaged
Nov 22 07:01:45 np0005531992.novalocal irqbalance[822]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 22 07:01:45 np0005531992.novalocal irqbalance[822]: IRQ 29 affinity is now unmanaged
Nov 22 07:01:45 np0005531992.novalocal dracut[1275]: *** Including module: squash ***
Nov 22 07:01:45 np0005531992.novalocal dracut[1275]: *** Including modules done ***
Nov 22 07:01:45 np0005531992.novalocal dracut[1275]: *** Installing kernel module dependencies ***
Nov 22 07:01:46 np0005531992.novalocal dracut[1275]: *** Installing kernel module dependencies done ***
Nov 22 07:01:46 np0005531992.novalocal dracut[1275]: *** Resolving executable dependencies ***
Nov 22 07:01:47 np0005531992.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 07:01:48 np0005531992.novalocal dracut[1275]: *** Resolving executable dependencies done ***
Nov 22 07:01:48 np0005531992.novalocal dracut[1275]: *** Generating early-microcode cpio image ***
Nov 22 07:01:48 np0005531992.novalocal dracut[1275]: *** Store current command line parameters ***
Nov 22 07:01:48 np0005531992.novalocal dracut[1275]: Stored kernel commandline:
Nov 22 07:01:48 np0005531992.novalocal dracut[1275]: No dracut internal kernel commandline stored in the initramfs
Nov 22 07:01:48 np0005531992.novalocal dracut[1275]: *** Install squash loader ***
Nov 22 07:01:49 np0005531992.novalocal dracut[1275]: *** Squashing the files inside the initramfs ***
Nov 22 07:01:50 np0005531992.novalocal dracut[1275]: *** Squashing the files inside the initramfs done ***
Nov 22 07:01:50 np0005531992.novalocal dracut[1275]: *** Creating image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' ***
Nov 22 07:01:50 np0005531992.novalocal dracut[1275]: *** Hardlinking files ***
Nov 22 07:01:50 np0005531992.novalocal dracut[1275]: Mode:           real
Nov 22 07:01:50 np0005531992.novalocal dracut[1275]: Files:          50
Nov 22 07:01:50 np0005531992.novalocal dracut[1275]: Linked:         0 files
Nov 22 07:01:50 np0005531992.novalocal dracut[1275]: Compared:       0 xattrs
Nov 22 07:01:50 np0005531992.novalocal dracut[1275]: Compared:       0 files
Nov 22 07:01:50 np0005531992.novalocal dracut[1275]: Saved:          0 B
Nov 22 07:01:50 np0005531992.novalocal dracut[1275]: Duration:       0.000582 seconds
Nov 22 07:01:50 np0005531992.novalocal dracut[1275]: *** Hardlinking files done ***
Nov 22 07:01:51 np0005531992.novalocal dracut[1275]: *** Creating initramfs image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' done ***
Nov 22 07:01:51 np0005531992.novalocal kdumpctl[1022]: kdump: kexec: loaded kdump kernel
Nov 22 07:01:51 np0005531992.novalocal kdumpctl[1022]: kdump: Starting kdump: [OK]
Nov 22 07:01:51 np0005531992.novalocal systemd[1]: Finished Crash recovery kernel arming.
Nov 22 07:01:51 np0005531992.novalocal systemd[1]: Startup finished in 1.506s (kernel) + 3.176s (initrd) + 20.765s (userspace) = 25.448s.
Nov 22 07:02:06 np0005531992.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 22 07:05:17 np0005531992.novalocal sshd-session[4305]: Accepted publickey for zuul from 38.102.83.114 port 35356 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Nov 22 07:05:17 np0005531992.novalocal systemd[1]: Created slice User Slice of UID 1000.
Nov 22 07:05:17 np0005531992.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 22 07:05:17 np0005531992.novalocal systemd-logind[826]: New session 1 of user zuul.
Nov 22 07:05:18 np0005531992.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 22 07:05:18 np0005531992.novalocal systemd[1]: Starting User Manager for UID 1000...
Nov 22 07:05:18 np0005531992.novalocal systemd[4309]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 07:05:18 np0005531992.novalocal systemd[4309]: Queued start job for default target Main User Target.
Nov 22 07:05:18 np0005531992.novalocal systemd[4309]: Created slice User Application Slice.
Nov 22 07:05:18 np0005531992.novalocal systemd[4309]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 22 07:05:18 np0005531992.novalocal systemd[4309]: Started Daily Cleanup of User's Temporary Directories.
Nov 22 07:05:18 np0005531992.novalocal systemd[4309]: Reached target Paths.
Nov 22 07:05:18 np0005531992.novalocal systemd[4309]: Reached target Timers.
Nov 22 07:05:18 np0005531992.novalocal systemd[4309]: Starting D-Bus User Message Bus Socket...
Nov 22 07:05:18 np0005531992.novalocal systemd[4309]: Starting Create User's Volatile Files and Directories...
Nov 22 07:05:18 np0005531992.novalocal systemd[4309]: Finished Create User's Volatile Files and Directories.
Nov 22 07:05:18 np0005531992.novalocal systemd[4309]: Listening on D-Bus User Message Bus Socket.
Nov 22 07:05:18 np0005531992.novalocal systemd[4309]: Reached target Sockets.
Nov 22 07:05:18 np0005531992.novalocal systemd[4309]: Reached target Basic System.
Nov 22 07:05:18 np0005531992.novalocal systemd[4309]: Reached target Main User Target.
Nov 22 07:05:18 np0005531992.novalocal systemd[4309]: Startup finished in 233ms.
Nov 22 07:05:18 np0005531992.novalocal systemd[1]: Started User Manager for UID 1000.
Nov 22 07:05:18 np0005531992.novalocal systemd[1]: Started Session 1 of User zuul.
Nov 22 07:05:18 np0005531992.novalocal sshd-session[4305]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 07:05:19 np0005531992.novalocal python3[4391]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 07:05:31 np0005531992.novalocal python3[4419]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 07:05:38 np0005531992.novalocal python3[4477]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 07:05:39 np0005531992.novalocal python3[4517]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 22 07:05:41 np0005531992.novalocal python3[4543]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDD1kjEUdV8xmh1huACovfsdqbyhCG+0xtsDpA8P/NyEzI1oyrWwJsZZo92OrgOuzPubJMEZ29PJtUtU02ImeiPkgt59gZvn+KfEn8ftKybiDhc+V9MIN0QnJiAUUGSBa9TTrbx9rTP5e50pUsArzr0MyvLR7aJitdFGHY2OatV/DNCooLkvUXhpNIskhJSIx87zxz0ctGrakwNWsNhKHa8FxKGcyo0DepuKt6bQpRsI/cg7dQsTgRzWRaF0qhpw7wCLhNc87ku6rRJF3e0ecj4moGZfeb5HfB6e86NtiD3dZXZexf+WAJYr+XKOMBAGZ51baAqxzg/IpXGpaORR6ww2CYLDArj1N0tLuK5VdyLCYsyUkxV5c+CcPXO4XeNAfqR6w7V4W8Hummbm4zhrUWqO9R0UOk8W1kMcKft5MkrZ0iI8aZQEA2AmYjv2PDp01XhtigzqnwnTCmMzU3apwxdAkdYxIvuq5BXaSH79BcrK4kpGt51Q27CUz6251fSM4E= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:05:41 np0005531992.novalocal python3[4567]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:05:42 np0005531992.novalocal python3[4666]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 07:05:42 np0005531992.novalocal python3[4737]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763795141.9343903-207-17049742676876/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=1ccdcb80eff14f5cbd1a48a2fa9d4612_id_rsa follow=False checksum=d617e3a0df0ea0e6726dda0cd9fbe463eb992ddb backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:05:43 np0005531992.novalocal python3[4860]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 07:05:43 np0005531992.novalocal python3[4931]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763795143.0267024-240-106479849438838/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=1ccdcb80eff14f5cbd1a48a2fa9d4612_id_rsa.pub follow=False checksum=0a5a6348d148ccdd5d890ccf89bace1c0877814e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:05:45 np0005531992.novalocal python3[4979]: ansible-ping Invoked with data=pong
Nov 22 07:05:46 np0005531992.novalocal python3[5003]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 07:05:48 np0005531992.novalocal python3[5061]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 22 07:05:49 np0005531992.novalocal python3[5093]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:05:49 np0005531992.novalocal python3[5117]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:05:50 np0005531992.novalocal python3[5141]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:05:50 np0005531992.novalocal python3[5165]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:05:50 np0005531992.novalocal python3[5189]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:05:50 np0005531992.novalocal python3[5213]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:05:52 np0005531992.novalocal sudo[5237]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtmrcyqbhbseyrksjglwesechifbqqht ; /usr/bin/python3'
Nov 22 07:05:52 np0005531992.novalocal sudo[5237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:05:52 np0005531992.novalocal python3[5239]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:05:52 np0005531992.novalocal sudo[5237]: pam_unix(sudo:session): session closed for user root
Nov 22 07:05:53 np0005531992.novalocal sudo[5315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elqumjmxfbvynvgukfxzszllroniutsj ; /usr/bin/python3'
Nov 22 07:05:53 np0005531992.novalocal sudo[5315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:05:53 np0005531992.novalocal python3[5317]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 07:05:53 np0005531992.novalocal sudo[5315]: pam_unix(sudo:session): session closed for user root
Nov 22 07:05:53 np0005531992.novalocal sudo[5388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzgfkgfmmdjrivorpfldodusjclcgqdh ; /usr/bin/python3'
Nov 22 07:05:53 np0005531992.novalocal sudo[5388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:05:53 np0005531992.novalocal python3[5390]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763795152.7305372-21-49508056824590/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:05:53 np0005531992.novalocal sudo[5388]: pam_unix(sudo:session): session closed for user root
Nov 22 07:05:54 np0005531992.novalocal python3[5438]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:05:54 np0005531992.novalocal python3[5462]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:05:54 np0005531992.novalocal python3[5486]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:05:55 np0005531992.novalocal python3[5510]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:05:55 np0005531992.novalocal python3[5534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:05:55 np0005531992.novalocal python3[5558]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:05:56 np0005531992.novalocal python3[5582]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:05:56 np0005531992.novalocal python3[5606]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:05:56 np0005531992.novalocal python3[5630]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:05:56 np0005531992.novalocal python3[5654]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:05:57 np0005531992.novalocal python3[5678]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:05:57 np0005531992.novalocal python3[5702]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:05:57 np0005531992.novalocal python3[5726]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:05:58 np0005531992.novalocal python3[5750]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:05:58 np0005531992.novalocal python3[5774]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:05:58 np0005531992.novalocal python3[5798]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:05:59 np0005531992.novalocal python3[5822]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:05:59 np0005531992.novalocal python3[5846]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:05:59 np0005531992.novalocal python3[5870]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:06:00 np0005531992.novalocal python3[5894]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:06:00 np0005531992.novalocal python3[5918]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:06:00 np0005531992.novalocal python3[5942]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:06:01 np0005531992.novalocal python3[5966]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:06:01 np0005531992.novalocal python3[5990]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:06:01 np0005531992.novalocal python3[6014]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:06:01 np0005531992.novalocal python3[6038]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:06:04 np0005531992.novalocal sudo[6062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmqanmwnresojyjmvswxkozimwpxubyc ; /usr/bin/python3'
Nov 22 07:06:04 np0005531992.novalocal sudo[6062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:06:04 np0005531992.novalocal python3[6064]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 22 07:06:05 np0005531992.novalocal systemd[1]: Starting Time & Date Service...
Nov 22 07:06:05 np0005531992.novalocal systemd[1]: Started Time & Date Service.
Nov 22 07:06:05 np0005531992.novalocal systemd-timedated[6066]: Changed time zone to 'UTC' (UTC).
Nov 22 07:06:05 np0005531992.novalocal sudo[6062]: pam_unix(sudo:session): session closed for user root
Nov 22 07:06:05 np0005531992.novalocal sudo[6093]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldaigtvtomjqqrekvhizkjjwcrkdoldw ; /usr/bin/python3'
Nov 22 07:06:05 np0005531992.novalocal sudo[6093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:06:05 np0005531992.novalocal python3[6095]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:06:05 np0005531992.novalocal sudo[6093]: pam_unix(sudo:session): session closed for user root
Nov 22 07:06:06 np0005531992.novalocal python3[6171]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 07:06:06 np0005531992.novalocal python3[6242]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1763795165.8009202-153-252041074496965/source _original_basename=tmpaizq2827 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:06:06 np0005531992.novalocal python3[6342]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 07:06:07 np0005531992.novalocal python3[6413]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763795166.6260169-183-111760621677393/source _original_basename=tmp8gajna5d follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:06:07 np0005531992.novalocal sudo[6513]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uimwbkxfbkzndytsrshgndjrsknutork ; /usr/bin/python3'
Nov 22 07:06:07 np0005531992.novalocal sudo[6513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:06:07 np0005531992.novalocal python3[6515]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 07:06:07 np0005531992.novalocal sudo[6513]: pam_unix(sudo:session): session closed for user root
Nov 22 07:06:08 np0005531992.novalocal sudo[6586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldzniqlesqczzkxzpeuaswxyoxkgfafh ; /usr/bin/python3'
Nov 22 07:06:08 np0005531992.novalocal sudo[6586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:06:08 np0005531992.novalocal python3[6588]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763795167.6446145-231-111928921268976/source _original_basename=tmpfnsb1pqo follow=False checksum=605526c5ba154ace4d6375847ad488a4e30d7367 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:06:08 np0005531992.novalocal sudo[6586]: pam_unix(sudo:session): session closed for user root
Nov 22 07:06:08 np0005531992.novalocal python3[6636]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:06:09 np0005531992.novalocal python3[6662]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:06:09 np0005531992.novalocal sudo[6740]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqpxfbgopysiirhvjopgqstwivsofurx ; /usr/bin/python3'
Nov 22 07:06:09 np0005531992.novalocal sudo[6740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:06:09 np0005531992.novalocal python3[6742]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 07:06:09 np0005531992.novalocal sudo[6740]: pam_unix(sudo:session): session closed for user root
Nov 22 07:06:09 np0005531992.novalocal sudo[6813]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdgokuhldxcqjadfbatyhfvwgimrlyes ; /usr/bin/python3'
Nov 22 07:06:09 np0005531992.novalocal sudo[6813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:06:10 np0005531992.novalocal python3[6815]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1763795169.3720305-273-160663060496326/source _original_basename=tmpos2dir52 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:06:10 np0005531992.novalocal sudo[6813]: pam_unix(sudo:session): session closed for user root
Nov 22 07:06:10 np0005531992.novalocal sudo[6864]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkvxilybfcwlilawkammnrwurcfqmyev ; /usr/bin/python3'
Nov 22 07:06:10 np0005531992.novalocal sudo[6864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:06:10 np0005531992.novalocal python3[6866]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-8727-be60-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:06:10 np0005531992.novalocal sudo[6864]: pam_unix(sudo:session): session closed for user root
Nov 22 07:06:11 np0005531992.novalocal python3[6894]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-8727-be60-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 22 07:06:12 np0005531992.novalocal python3[6922]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:06:15 np0005531992.novalocal irqbalance[822]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 22 07:06:15 np0005531992.novalocal irqbalance[822]: IRQ 27 affinity is now unmanaged
Nov 22 07:06:33 np0005531992.novalocal sudo[6946]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmbzrckugknwinhqmcsufmewpzxafrvp ; /usr/bin/python3'
Nov 22 07:06:33 np0005531992.novalocal sudo[6946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:06:33 np0005531992.novalocal python3[6948]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:06:33 np0005531992.novalocal sudo[6946]: pam_unix(sudo:session): session closed for user root
Nov 22 07:06:35 np0005531992.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 22 07:07:13 np0005531992.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 22 07:07:13 np0005531992.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 22 07:07:13 np0005531992.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 22 07:07:13 np0005531992.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 22 07:07:13 np0005531992.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 22 07:07:13 np0005531992.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 22 07:07:13 np0005531992.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 22 07:07:13 np0005531992.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 22 07:07:13 np0005531992.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 22 07:07:13 np0005531992.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 22 07:07:13 np0005531992.novalocal NetworkManager[862]: <info>  [1763795233.8985] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 22 07:07:13 np0005531992.novalocal systemd-udevd[6954]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 07:07:13 np0005531992.novalocal NetworkManager[862]: <info>  [1763795233.9137] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 07:07:13 np0005531992.novalocal NetworkManager[862]: <info>  [1763795233.9161] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 22 07:07:13 np0005531992.novalocal NetworkManager[862]: <info>  [1763795233.9165] device (eth1): carrier: link connected
Nov 22 07:07:13 np0005531992.novalocal NetworkManager[862]: <info>  [1763795233.9167] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 22 07:07:13 np0005531992.novalocal NetworkManager[862]: <info>  [1763795233.9172] policy: auto-activating connection 'Wired connection 1' (e2c878f8-511d-38c1-9152-001095563e31)
Nov 22 07:07:13 np0005531992.novalocal NetworkManager[862]: <info>  [1763795233.9175] device (eth1): Activation: starting connection 'Wired connection 1' (e2c878f8-511d-38c1-9152-001095563e31)
Nov 22 07:07:13 np0005531992.novalocal NetworkManager[862]: <info>  [1763795233.9176] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 07:07:13 np0005531992.novalocal NetworkManager[862]: <info>  [1763795233.9180] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 07:07:13 np0005531992.novalocal NetworkManager[862]: <info>  [1763795233.9183] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 07:07:13 np0005531992.novalocal NetworkManager[862]: <info>  [1763795233.9186] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 22 07:07:14 np0005531992.novalocal python3[6980]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-9052-2e86-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:07:21 np0005531992.novalocal sudo[7058]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bepvtpeyzfyzewulxroehwhxdyvgkekb ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 22 07:07:21 np0005531992.novalocal sudo[7058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:07:21 np0005531992.novalocal python3[7060]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 07:07:21 np0005531992.novalocal sudo[7058]: pam_unix(sudo:session): session closed for user root
Nov 22 07:07:22 np0005531992.novalocal sudo[7131]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bspyssccwvoylfgpzetnvdevrzwvzjgg ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 22 07:07:22 np0005531992.novalocal sudo[7131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:07:22 np0005531992.novalocal python3[7133]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763795241.530418-102-1668322560160/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=2b77e6faadea2e2fc4d8ccfd205bce1a6722e24c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:07:22 np0005531992.novalocal sudo[7131]: pam_unix(sudo:session): session closed for user root
Nov 22 07:07:22 np0005531992.novalocal sudo[7181]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bliicpemesmqvxtletaazhzjoullylki ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 22 07:07:22 np0005531992.novalocal sudo[7181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:07:22 np0005531992.novalocal python3[7183]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 07:07:23 np0005531992.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 22 07:07:23 np0005531992.novalocal systemd[1]: Stopped Network Manager Wait Online.
Nov 22 07:07:23 np0005531992.novalocal systemd[1]: Stopping Network Manager Wait Online...
Nov 22 07:07:23 np0005531992.novalocal systemd[1]: Stopping Network Manager...
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[862]: <info>  [1763795243.0211] caught SIGTERM, shutting down normally.
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[862]: <info>  [1763795243.0220] dhcp4 (eth0): canceled DHCP transaction
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[862]: <info>  [1763795243.0220] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[862]: <info>  [1763795243.0221] dhcp4 (eth0): state changed no lease
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[862]: <info>  [1763795243.0223] manager: NetworkManager state is now CONNECTING
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[862]: <info>  [1763795243.0345] dhcp4 (eth1): canceled DHCP transaction
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[862]: <info>  [1763795243.0346] dhcp4 (eth1): state changed no lease
Nov 22 07:07:23 np0005531992.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 07:07:23 np0005531992.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[862]: <info>  [1763795243.3257] exiting (success)
Nov 22 07:07:23 np0005531992.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 22 07:07:23 np0005531992.novalocal systemd[1]: Stopped Network Manager.
Nov 22 07:07:23 np0005531992.novalocal systemd[1]: NetworkManager.service: Consumed 2.885s CPU time, 9.9M memory peak.
Nov 22 07:07:23 np0005531992.novalocal systemd[1]: Starting Network Manager...
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.3823] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:a7489e2e-a622-4254-9a7e-02eae9fa3dfd)
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.3824] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.3873] manager[0x55e9e2bb8070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 22 07:07:23 np0005531992.novalocal systemd[1]: Starting Hostname Service...
Nov 22 07:07:23 np0005531992.novalocal systemd[1]: Started Hostname Service.
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4799] hostname: hostname: using hostnamed
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4802] hostname: static hostname changed from (none) to "np0005531992.novalocal"
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4806] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4810] manager[0x55e9e2bb8070]: rfkill: Wi-Fi hardware radio set enabled
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4810] manager[0x55e9e2bb8070]: rfkill: WWAN hardware radio set enabled
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4834] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4834] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4835] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4835] manager: Networking is enabled by state file
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4837] settings: Loaded settings plugin: keyfile (internal)
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4841] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4865] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4873] dhcp: init: Using DHCP client 'internal'
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4876] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4880] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4885] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4891] device (lo): Activation: starting connection 'lo' (d01cbcdc-cc87-4c04-b365-895d2218de25)
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4896] device (eth0): carrier: link connected
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4900] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4904] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4904] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4910] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4916] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4922] device (eth1): carrier: link connected
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4925] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4930] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (e2c878f8-511d-38c1-9152-001095563e31) (indicated)
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4930] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4935] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4942] device (eth1): Activation: starting connection 'Wired connection 1' (e2c878f8-511d-38c1-9152-001095563e31)
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4948] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 22 07:07:23 np0005531992.novalocal systemd[1]: Started Network Manager.
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4952] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4954] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4956] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4958] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4960] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4961] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4963] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4976] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4991] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.4998] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.5016] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.5019] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.5038] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.5044] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.5051] device (lo): Activation: successful, device activated.
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.5059] dhcp4 (eth0): state changed new lease, address=38.129.56.85
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.5066] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 22 07:07:23 np0005531992.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 22 07:07:23 np0005531992.novalocal sudo[7181]: pam_unix(sudo:session): session closed for user root
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.7225] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.7285] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.7287] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.7289] manager: NetworkManager state is now CONNECTED_SITE
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.7292] device (eth0): Activation: successful, device activated.
Nov 22 07:07:23 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795243.7296] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 22 07:07:23 np0005531992.novalocal python3[7249]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-9052-2e86-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:07:33 np0005531992.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 07:07:51 np0005531992.novalocal systemd[4309]: Starting Mark boot as successful...
Nov 22 07:07:51 np0005531992.novalocal systemd[4309]: Finished Mark boot as successful.
Nov 22 07:07:53 np0005531992.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 22 07:08:08 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795288.3545] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 22 07:08:08 np0005531992.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 07:08:08 np0005531992.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 07:08:08 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795288.3860] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 22 07:08:08 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795288.3863] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 22 07:08:08 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795288.3871] device (eth1): Activation: successful, device activated.
Nov 22 07:08:08 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795288.3878] manager: startup complete
Nov 22 07:08:08 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795288.3880] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 22 07:08:08 np0005531992.novalocal NetworkManager[7200]: <warn>  [1763795288.3884] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 22 07:08:08 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795288.3891] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 22 07:08:08 np0005531992.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 22 07:08:08 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795288.4115] dhcp4 (eth1): canceled DHCP transaction
Nov 22 07:08:08 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795288.4115] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 22 07:08:08 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795288.4116] dhcp4 (eth1): state changed no lease
Nov 22 07:08:08 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795288.4139] policy: auto-activating connection 'ci-private-network' (ba681640-7f4a-58d5-a224-a4a4f9cf13bc)
Nov 22 07:08:08 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795288.4146] device (eth1): Activation: starting connection 'ci-private-network' (ba681640-7f4a-58d5-a224-a4a4f9cf13bc)
Nov 22 07:08:08 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795288.4147] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 07:08:08 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795288.4151] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 07:08:08 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795288.4160] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 07:08:08 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795288.4173] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 07:08:08 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795288.5991] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 07:08:08 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795288.5994] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 07:08:08 np0005531992.novalocal NetworkManager[7200]: <info>  [1763795288.6001] device (eth1): Activation: successful, device activated.
Nov 22 07:08:18 np0005531992.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 07:08:23 np0005531992.novalocal sshd-session[4318]: Received disconnect from 38.102.83.114 port 35356:11: disconnected by user
Nov 22 07:08:23 np0005531992.novalocal sshd-session[4318]: Disconnected from user zuul 38.102.83.114 port 35356
Nov 22 07:08:23 np0005531992.novalocal sshd-session[4305]: pam_unix(sshd:session): session closed for user zuul
Nov 22 07:08:23 np0005531992.novalocal systemd-logind[826]: Session 1 logged out. Waiting for processes to exit.
Nov 22 07:08:28 np0005531992.novalocal sshd-session[7298]: Accepted publickey for zuul from 38.102.83.114 port 57448 ssh2: RSA SHA256:g1zSa//+/mxUXmf2M16Bba4a7+RLV+1PmLKCUOr+UqA
Nov 22 07:08:28 np0005531992.novalocal systemd-logind[826]: New session 3 of user zuul.
Nov 22 07:08:28 np0005531992.novalocal systemd[1]: Started Session 3 of User zuul.
Nov 22 07:08:28 np0005531992.novalocal sshd-session[7298]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 07:08:29 np0005531992.novalocal sudo[7377]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgfifojaxdvwtthigvifqgqnidddabxd ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 22 07:08:29 np0005531992.novalocal sudo[7377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:08:29 np0005531992.novalocal python3[7379]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 07:08:29 np0005531992.novalocal sudo[7377]: pam_unix(sudo:session): session closed for user root
Nov 22 07:08:29 np0005531992.novalocal sudo[7450]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueiihuyjntkzttzojlduksjfmnqkrrnv ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 22 07:08:29 np0005531992.novalocal sudo[7450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:08:29 np0005531992.novalocal python3[7452]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763795309.03222-259-39750108988884/source _original_basename=tmp29o0ds37 follow=False checksum=c2fcdefbd55a85f420f3b8a84215926abaa80ee6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:08:29 np0005531992.novalocal sudo[7450]: pam_unix(sudo:session): session closed for user root
Nov 22 07:08:32 np0005531992.novalocal sshd-session[7301]: Connection closed by 38.102.83.114 port 57448
Nov 22 07:08:32 np0005531992.novalocal sshd-session[7298]: pam_unix(sshd:session): session closed for user zuul
Nov 22 07:08:32 np0005531992.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Nov 22 07:08:32 np0005531992.novalocal systemd-logind[826]: Session 3 logged out. Waiting for processes to exit.
Nov 22 07:08:32 np0005531992.novalocal systemd-logind[826]: Removed session 3.
Nov 22 07:10:51 np0005531992.novalocal systemd[4309]: Created slice User Background Tasks Slice.
Nov 22 07:10:51 np0005531992.novalocal systemd[4309]: Starting Cleanup of User's Temporary Files and Directories...
Nov 22 07:10:51 np0005531992.novalocal systemd[4309]: Finished Cleanup of User's Temporary Files and Directories.
Nov 22 07:16:51 np0005531992.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Nov 22 07:16:51 np0005531992.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 22 07:16:51 np0005531992.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Nov 22 07:16:51 np0005531992.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 22 07:17:00 np0005531992.novalocal sshd-session[7485]: Accepted publickey for zuul from 38.102.83.114 port 52944 ssh2: RSA SHA256:g1zSa//+/mxUXmf2M16Bba4a7+RLV+1PmLKCUOr+UqA
Nov 22 07:17:00 np0005531992.novalocal systemd-logind[826]: New session 4 of user zuul.
Nov 22 07:17:00 np0005531992.novalocal systemd[1]: Started Session 4 of User zuul.
Nov 22 07:17:00 np0005531992.novalocal sshd-session[7485]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 07:17:00 np0005531992.novalocal sudo[7512]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imdqkiwucuyumdpnybqerdqyevkcqlqd ; /usr/bin/python3'
Nov 22 07:17:00 np0005531992.novalocal sudo[7512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:17:00 np0005531992.novalocal python3[7514]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-8af7-a90d-000000001cf0-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:17:00 np0005531992.novalocal sudo[7512]: pam_unix(sudo:session): session closed for user root
Nov 22 07:17:00 np0005531992.novalocal sudo[7541]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acclputbstzzmaosjfcnfbbgqqrbeusa ; /usr/bin/python3'
Nov 22 07:17:00 np0005531992.novalocal sudo[7541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:17:00 np0005531992.novalocal python3[7543]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:17:00 np0005531992.novalocal sudo[7541]: pam_unix(sudo:session): session closed for user root
Nov 22 07:17:00 np0005531992.novalocal sudo[7567]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hubqfmjopfpxyasoxtjpktmdepubvdwj ; /usr/bin/python3'
Nov 22 07:17:00 np0005531992.novalocal sudo[7567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:17:01 np0005531992.novalocal python3[7569]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:17:01 np0005531992.novalocal sudo[7567]: pam_unix(sudo:session): session closed for user root
Nov 22 07:17:01 np0005531992.novalocal sudo[7593]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwgvjbldznjndnymdohvdhnhacvnrdxn ; /usr/bin/python3'
Nov 22 07:17:01 np0005531992.novalocal sudo[7593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:17:01 np0005531992.novalocal python3[7595]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:17:01 np0005531992.novalocal sudo[7593]: pam_unix(sudo:session): session closed for user root
Nov 22 07:17:01 np0005531992.novalocal sudo[7619]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxtfmmruedyjwqvbnhnzwluhgjfmtrfl ; /usr/bin/python3'
Nov 22 07:17:01 np0005531992.novalocal sudo[7619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:17:01 np0005531992.novalocal python3[7621]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:17:01 np0005531992.novalocal sudo[7619]: pam_unix(sudo:session): session closed for user root
Nov 22 07:17:02 np0005531992.novalocal sudo[7645]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwubnzwjvlkbdhcuordennypgrxkzqle ; /usr/bin/python3'
Nov 22 07:17:02 np0005531992.novalocal sudo[7645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:17:02 np0005531992.novalocal python3[7647]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:17:02 np0005531992.novalocal sudo[7645]: pam_unix(sudo:session): session closed for user root
Nov 22 07:17:02 np0005531992.novalocal sudo[7723]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gylmadangcrroievczpuyoqhqdngncmm ; /usr/bin/python3'
Nov 22 07:17:02 np0005531992.novalocal sudo[7723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:17:02 np0005531992.novalocal python3[7725]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 07:17:02 np0005531992.novalocal sudo[7723]: pam_unix(sudo:session): session closed for user root
Nov 22 07:17:03 np0005531992.novalocal sudo[7796]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwmcglxsrngzzbvwrxhjdpuclxdtvzjv ; /usr/bin/python3'
Nov 22 07:17:03 np0005531992.novalocal sudo[7796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:17:03 np0005531992.novalocal python3[7798]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763795822.532849-497-153640750087332/source _original_basename=tmpqep4jrpt follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:17:03 np0005531992.novalocal sudo[7796]: pam_unix(sudo:session): session closed for user root
Nov 22 07:17:03 np0005531992.novalocal sudo[7846]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufylsocvfysetwgzymwocpeukokfucdy ; /usr/bin/python3'
Nov 22 07:17:03 np0005531992.novalocal sudo[7846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:17:04 np0005531992.novalocal python3[7848]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 07:17:04 np0005531992.novalocal systemd[1]: Reloading.
Nov 22 07:17:04 np0005531992.novalocal systemd-rc-local-generator[7869]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 07:17:04 np0005531992.novalocal sudo[7846]: pam_unix(sudo:session): session closed for user root
Nov 22 07:17:05 np0005531992.novalocal sudo[7902]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxbfjoknvqexbxrgozqeufaxjokbdent ; /usr/bin/python3'
Nov 22 07:17:05 np0005531992.novalocal sudo[7902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:17:05 np0005531992.novalocal python3[7904]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 22 07:17:05 np0005531992.novalocal sudo[7902]: pam_unix(sudo:session): session closed for user root
Nov 22 07:17:06 np0005531992.novalocal sudo[7928]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovpnwfnbjtnoemcmekzsnvcnzqgezmvl ; /usr/bin/python3'
Nov 22 07:17:06 np0005531992.novalocal sudo[7928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:17:06 np0005531992.novalocal python3[7930]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:17:06 np0005531992.novalocal sudo[7928]: pam_unix(sudo:session): session closed for user root
Nov 22 07:17:06 np0005531992.novalocal sudo[7956]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vktvryeucjcifrpffmilyflyahqzmowd ; /usr/bin/python3'
Nov 22 07:17:06 np0005531992.novalocal sudo[7956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:17:06 np0005531992.novalocal python3[7958]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:17:06 np0005531992.novalocal sudo[7956]: pam_unix(sudo:session): session closed for user root
Nov 22 07:17:06 np0005531992.novalocal sudo[7984]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbepnrmhpljycqyzixmxjqxzknurrhon ; /usr/bin/python3'
Nov 22 07:17:06 np0005531992.novalocal sudo[7984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:17:06 np0005531992.novalocal python3[7986]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:17:06 np0005531992.novalocal sudo[7984]: pam_unix(sudo:session): session closed for user root
Nov 22 07:17:06 np0005531992.novalocal sudo[8012]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inzcfyfwtlwncceriqyvzxfviutjaguz ; /usr/bin/python3'
Nov 22 07:17:06 np0005531992.novalocal sudo[8012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:17:07 np0005531992.novalocal python3[8014]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:17:07 np0005531992.novalocal sudo[8012]: pam_unix(sudo:session): session closed for user root
Nov 22 07:17:07 np0005531992.novalocal python3[8041]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-8af7-a90d-000000001cf7-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:17:08 np0005531992.novalocal python3[8071]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 07:17:09 np0005531992.novalocal sshd-session[7488]: Connection closed by 38.102.83.114 port 52944
Nov 22 07:17:09 np0005531992.novalocal sshd-session[7485]: pam_unix(sshd:session): session closed for user zuul
Nov 22 07:17:09 np0005531992.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Nov 22 07:17:09 np0005531992.novalocal systemd[1]: session-4.scope: Consumed 4.025s CPU time.
Nov 22 07:17:09 np0005531992.novalocal systemd-logind[826]: Session 4 logged out. Waiting for processes to exit.
Nov 22 07:17:09 np0005531992.novalocal systemd-logind[826]: Removed session 4.
Nov 22 07:17:11 np0005531992.novalocal sshd-session[8077]: Accepted publickey for zuul from 38.102.83.114 port 48862 ssh2: RSA SHA256:g1zSa//+/mxUXmf2M16Bba4a7+RLV+1PmLKCUOr+UqA
Nov 22 07:17:11 np0005531992.novalocal systemd-logind[826]: New session 5 of user zuul.
Nov 22 07:17:11 np0005531992.novalocal systemd[1]: Started Session 5 of User zuul.
Nov 22 07:17:11 np0005531992.novalocal sshd-session[8077]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 07:17:11 np0005531992.novalocal sudo[8104]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaxernfdpbdgmpqhlepovcbvrnrmrqlu ; /usr/bin/python3'
Nov 22 07:17:11 np0005531992.novalocal sudo[8104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:17:11 np0005531992.novalocal python3[8106]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 22 07:17:33 np0005531992.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 22 07:17:33 np0005531992.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 07:17:33 np0005531992.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 22 07:17:33 np0005531992.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 07:17:33 np0005531992.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 22 07:17:33 np0005531992.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 07:17:33 np0005531992.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 07:17:33 np0005531992.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 07:17:42 np0005531992.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 22 07:17:42 np0005531992.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 07:17:42 np0005531992.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 22 07:17:42 np0005531992.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 07:17:42 np0005531992.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 22 07:17:42 np0005531992.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 07:17:42 np0005531992.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 07:17:42 np0005531992.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 07:17:52 np0005531992.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 22 07:17:52 np0005531992.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 07:17:52 np0005531992.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 22 07:17:52 np0005531992.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 07:17:52 np0005531992.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 22 07:17:52 np0005531992.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 07:17:52 np0005531992.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 07:17:52 np0005531992.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 07:17:53 np0005531992.novalocal setsebool[8175]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 22 07:17:53 np0005531992.novalocal setsebool[8175]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 22 07:18:07 np0005531992.novalocal kernel: SELinux:  Converting 388 SID table entries...
Nov 22 07:18:07 np0005531992.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 07:18:07 np0005531992.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 22 07:18:07 np0005531992.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 07:18:07 np0005531992.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 22 07:18:07 np0005531992.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 07:18:07 np0005531992.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 07:18:07 np0005531992.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 07:18:32 np0005531992.novalocal dbus-broker-launch[817]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 22 07:18:33 np0005531992.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 07:18:33 np0005531992.novalocal systemd[1]: Starting man-db-cache-update.service...
Nov 22 07:18:33 np0005531992.novalocal systemd[1]: Reloading.
Nov 22 07:18:33 np0005531992.novalocal systemd-rc-local-generator[8932]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 07:18:33 np0005531992.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 07:18:35 np0005531992.novalocal sudo[8104]: pam_unix(sudo:session): session closed for user root
Nov 22 07:18:36 np0005531992.novalocal python3[11618]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-9c05-236f-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:18:38 np0005531992.novalocal kernel: evm: overlay not supported
Nov 22 07:18:38 np0005531992.novalocal systemd[4309]: Starting D-Bus User Message Bus...
Nov 22 07:18:38 np0005531992.novalocal dbus-broker-launch[12511]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 22 07:18:38 np0005531992.novalocal dbus-broker-launch[12511]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 22 07:18:38 np0005531992.novalocal systemd[4309]: Started D-Bus User Message Bus.
Nov 22 07:18:38 np0005531992.novalocal dbus-broker-lau[12511]: Ready
Nov 22 07:18:38 np0005531992.novalocal systemd[4309]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 22 07:18:38 np0005531992.novalocal systemd[4309]: Created slice Slice /user.
Nov 22 07:18:38 np0005531992.novalocal systemd[4309]: podman-12280.scope: unit configures an IP firewall, but not running as root.
Nov 22 07:18:38 np0005531992.novalocal systemd[4309]: (This warning is only shown for the first unit using IP firewalling.)
Nov 22 07:18:38 np0005531992.novalocal systemd[4309]: Started podman-12280.scope.
Nov 22 07:18:38 np0005531992.novalocal systemd[4309]: Started podman-pause-8fb455de.scope.
Nov 22 07:18:38 np0005531992.novalocal sudo[12853]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxwapffjamzfatxacvxyclcxmwvlrxob ; /usr/bin/python3'
Nov 22 07:18:38 np0005531992.novalocal sudo[12853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:18:38 np0005531992.novalocal python3[12874]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.222:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.222:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:18:38 np0005531992.novalocal python3[12874]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 22 07:18:39 np0005531992.novalocal sudo[12853]: pam_unix(sudo:session): session closed for user root
Nov 22 07:18:39 np0005531992.novalocal sshd-session[8080]: Connection closed by 38.102.83.114 port 48862
Nov 22 07:18:39 np0005531992.novalocal sshd-session[8077]: pam_unix(sshd:session): session closed for user zuul
Nov 22 07:18:39 np0005531992.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Nov 22 07:18:39 np0005531992.novalocal systemd[1]: session-5.scope: Consumed 1min 2.131s CPU time.
Nov 22 07:18:39 np0005531992.novalocal systemd-logind[826]: Session 5 logged out. Waiting for processes to exit.
Nov 22 07:18:39 np0005531992.novalocal systemd-logind[826]: Removed session 5.
Nov 22 07:19:01 np0005531992.novalocal sshd-session[19422]: Connection closed by 38.129.56.128 port 55556 [preauth]
Nov 22 07:19:01 np0005531992.novalocal sshd-session[19423]: Unable to negotiate with 38.129.56.128 port 55580: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 22 07:19:01 np0005531992.novalocal sshd-session[19428]: Connection closed by 38.129.56.128 port 55570 [preauth]
Nov 22 07:19:01 np0005531992.novalocal sshd-session[19426]: Unable to negotiate with 38.129.56.128 port 55584: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 22 07:19:01 np0005531992.novalocal sshd-session[19429]: Unable to negotiate with 38.129.56.128 port 55600: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 22 07:19:07 np0005531992.novalocal sshd-session[21089]: Accepted publickey for zuul from 38.102.83.114 port 50710 ssh2: RSA SHA256:g1zSa//+/mxUXmf2M16Bba4a7+RLV+1PmLKCUOr+UqA
Nov 22 07:19:07 np0005531992.novalocal systemd-logind[826]: New session 6 of user zuul.
Nov 22 07:19:07 np0005531992.novalocal systemd[1]: Started Session 6 of User zuul.
Nov 22 07:19:07 np0005531992.novalocal sshd-session[21089]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 07:19:07 np0005531992.novalocal python3[21178]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDm7A2InWKgBNTutTlNcFOsqBEYldIg67lAwWKAy1MFDVWMcNE8fA+N5nYmLMSTphanqoXuPVO+UyG7f4C6/SUA= zuul@np0005531991.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:19:07 np0005531992.novalocal sudo[21389]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kggckoqnqbxjrinreccnhebygmrcwkjt ; /usr/bin/python3'
Nov 22 07:19:07 np0005531992.novalocal sudo[21389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:19:08 np0005531992.novalocal python3[21396]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDm7A2InWKgBNTutTlNcFOsqBEYldIg67lAwWKAy1MFDVWMcNE8fA+N5nYmLMSTphanqoXuPVO+UyG7f4C6/SUA= zuul@np0005531991.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:19:08 np0005531992.novalocal sudo[21389]: pam_unix(sudo:session): session closed for user root
Nov 22 07:19:08 np0005531992.novalocal sudo[21815]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elduarnupuohpouijjvzqeynigvocuit ; /usr/bin/python3'
Nov 22 07:19:08 np0005531992.novalocal sudo[21815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:19:09 np0005531992.novalocal python3[21821]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005531992.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 22 07:19:09 np0005531992.novalocal useradd[21875]: new group: name=cloud-admin, GID=1002
Nov 22 07:19:09 np0005531992.novalocal useradd[21875]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Nov 22 07:19:10 np0005531992.novalocal sudo[21815]: pam_unix(sudo:session): session closed for user root
Nov 22 07:19:10 np0005531992.novalocal sudo[22242]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhboabiuohqrrglpzkjhjsxvttszrklm ; /usr/bin/python3'
Nov 22 07:19:10 np0005531992.novalocal sudo[22242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:19:10 np0005531992.novalocal python3[22244]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDm7A2InWKgBNTutTlNcFOsqBEYldIg67lAwWKAy1MFDVWMcNE8fA+N5nYmLMSTphanqoXuPVO+UyG7f4C6/SUA= zuul@np0005531991.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 07:19:10 np0005531992.novalocal sudo[22242]: pam_unix(sudo:session): session closed for user root
Nov 22 07:19:10 np0005531992.novalocal sudo[22356]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-napijxzqbtanbdnczwfgpcbpvliadaps ; /usr/bin/python3'
Nov 22 07:19:10 np0005531992.novalocal sudo[22356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:19:10 np0005531992.novalocal python3[22361]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 07:19:10 np0005531992.novalocal sudo[22356]: pam_unix(sudo:session): session closed for user root
Nov 22 07:19:11 np0005531992.novalocal sudo[22553]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkazkdciefgdukoyvjwkcvlkvbalrtsr ; /usr/bin/python3'
Nov 22 07:19:11 np0005531992.novalocal sudo[22553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:19:11 np0005531992.novalocal python3[22558]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763795950.601632-135-120353729165244/source _original_basename=tmp66pyhgit follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:19:11 np0005531992.novalocal sudo[22553]: pam_unix(sudo:session): session closed for user root
Nov 22 07:19:12 np0005531992.novalocal sudo[22798]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsrtoxidamglcmqlywarwaawpdtqluwy ; /usr/bin/python3'
Nov 22 07:19:12 np0005531992.novalocal sudo[22798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:19:12 np0005531992.novalocal python3[22808]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 22 07:19:12 np0005531992.novalocal systemd[1]: Starting Hostname Service...
Nov 22 07:19:12 np0005531992.novalocal systemd[1]: Started Hostname Service.
Nov 22 07:19:12 np0005531992.novalocal systemd-hostnamed[22943]: Changed pretty hostname to 'compute-0'
Nov 22 07:19:12 compute-0 systemd-hostnamed[22943]: Hostname set to <compute-0> (static)
Nov 22 07:19:12 compute-0 NetworkManager[7200]: <info>  [1763795952.3735] hostname: static hostname changed from "np0005531992.novalocal" to "compute-0"
Nov 22 07:19:12 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 07:19:12 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 07:19:12 compute-0 sudo[22798]: pam_unix(sudo:session): session closed for user root
Nov 22 07:19:12 compute-0 sshd-session[21142]: Connection closed by 38.102.83.114 port 50710
Nov 22 07:19:12 compute-0 sshd-session[21089]: pam_unix(sshd:session): session closed for user zuul
Nov 22 07:19:12 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Nov 22 07:19:12 compute-0 systemd[1]: session-6.scope: Consumed 2.225s CPU time.
Nov 22 07:19:13 compute-0 systemd-logind[826]: Session 6 logged out. Waiting for processes to exit.
Nov 22 07:19:13 compute-0 systemd-logind[826]: Removed session 6.
Nov 22 07:19:22 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 07:19:33 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 07:19:33 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 07:19:33 compute-0 systemd[1]: man-db-cache-update.service: Consumed 52.802s CPU time.
Nov 22 07:19:33 compute-0 systemd[1]: run-r520ccb955c1c4778b1cd0f9511313dc5.service: Deactivated successfully.
Nov 22 07:19:42 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 22 07:22:00 compute-0 sshd-session[29933]: Connection closed by 80.94.92.164 port 47576
Nov 22 07:26:54 compute-0 sshd-session[29938]: Accepted publickey for zuul from 38.129.56.128 port 45838 ssh2: RSA SHA256:g1zSa//+/mxUXmf2M16Bba4a7+RLV+1PmLKCUOr+UqA
Nov 22 07:26:54 compute-0 systemd-logind[826]: New session 7 of user zuul.
Nov 22 07:26:54 compute-0 systemd[1]: Started Session 7 of User zuul.
Nov 22 07:26:54 compute-0 sshd-session[29938]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 07:26:54 compute-0 python3[30014]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 07:26:56 compute-0 sudo[30128]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwkkewnqleunecjszxsbmkmsrhbwzeoo ; /usr/bin/python3'
Nov 22 07:26:56 compute-0 sudo[30128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:26:56 compute-0 python3[30130]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 07:26:56 compute-0 sudo[30128]: pam_unix(sudo:session): session closed for user root
Nov 22 07:26:56 compute-0 sudo[30201]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfwiqxjldvgmisemhhukiwbiiadgvgqs ; /usr/bin/python3'
Nov 22 07:26:56 compute-0 sudo[30201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:26:57 compute-0 python3[30203]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763796416.1305203-33574-85762469470805/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:26:57 compute-0 sudo[30201]: pam_unix(sudo:session): session closed for user root
Nov 22 07:26:57 compute-0 sudo[30227]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwutvayqdqyiiciulljtvitcljjmvela ; /usr/bin/python3'
Nov 22 07:26:57 compute-0 sudo[30227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:26:57 compute-0 python3[30229]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 07:26:57 compute-0 sudo[30227]: pam_unix(sudo:session): session closed for user root
Nov 22 07:26:57 compute-0 sudo[30300]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jixnkhrazaskxwuvosndpmikgesnljll ; /usr/bin/python3'
Nov 22 07:26:57 compute-0 sudo[30300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:26:57 compute-0 python3[30302]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763796416.1305203-33574-85762469470805/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:26:57 compute-0 sudo[30300]: pam_unix(sudo:session): session closed for user root
Nov 22 07:26:57 compute-0 sudo[30326]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vifeccwsoqjsixegpaostppbryfixkfc ; /usr/bin/python3'
Nov 22 07:26:57 compute-0 sudo[30326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:26:58 compute-0 python3[30328]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 07:26:58 compute-0 sudo[30326]: pam_unix(sudo:session): session closed for user root
Nov 22 07:26:58 compute-0 sudo[30399]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gytaopmaoyxiidimjmlletuompyugevw ; /usr/bin/python3'
Nov 22 07:26:58 compute-0 sudo[30399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:26:58 compute-0 python3[30401]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763796416.1305203-33574-85762469470805/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:26:58 compute-0 sudo[30399]: pam_unix(sudo:session): session closed for user root
Nov 22 07:26:58 compute-0 sudo[30425]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykeitnrrkopqwxowanytxnsvxzoljihz ; /usr/bin/python3'
Nov 22 07:26:58 compute-0 sudo[30425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:26:58 compute-0 python3[30427]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 07:26:58 compute-0 sudo[30425]: pam_unix(sudo:session): session closed for user root
Nov 22 07:26:58 compute-0 sudo[30498]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwwrvvjgzwvxrhlypzzqwvbmzsvypfjy ; /usr/bin/python3'
Nov 22 07:26:58 compute-0 sudo[30498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:26:59 compute-0 python3[30500]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763796416.1305203-33574-85762469470805/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:26:59 compute-0 sudo[30498]: pam_unix(sudo:session): session closed for user root
Nov 22 07:26:59 compute-0 sudo[30524]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjeoekceootyhoeuhhkumyvyakejhctt ; /usr/bin/python3'
Nov 22 07:26:59 compute-0 sudo[30524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:26:59 compute-0 python3[30526]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 07:26:59 compute-0 sudo[30524]: pam_unix(sudo:session): session closed for user root
Nov 22 07:26:59 compute-0 sudo[30597]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clwztlvihqrotrdlekyxkyiihyqeijfa ; /usr/bin/python3'
Nov 22 07:26:59 compute-0 sudo[30597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:26:59 compute-0 python3[30599]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763796416.1305203-33574-85762469470805/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:26:59 compute-0 sudo[30597]: pam_unix(sudo:session): session closed for user root
Nov 22 07:26:59 compute-0 sudo[30623]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxctiuqstprqcfbtfktuvwnvtlpoekcr ; /usr/bin/python3'
Nov 22 07:26:59 compute-0 sudo[30623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:27:00 compute-0 python3[30625]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 07:27:00 compute-0 sudo[30623]: pam_unix(sudo:session): session closed for user root
Nov 22 07:27:00 compute-0 sudo[30696]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxanrjsnzcezhezcsyxpapytahvrbuec ; /usr/bin/python3'
Nov 22 07:27:00 compute-0 sudo[30696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:27:00 compute-0 python3[30698]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763796416.1305203-33574-85762469470805/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:27:00 compute-0 sudo[30696]: pam_unix(sudo:session): session closed for user root
Nov 22 07:27:00 compute-0 sudo[30722]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnbqavgztlpllsnaolpaqrtywfbcisjy ; /usr/bin/python3'
Nov 22 07:27:00 compute-0 sudo[30722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:27:01 compute-0 python3[30724]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 07:27:01 compute-0 sudo[30722]: pam_unix(sudo:session): session closed for user root
Nov 22 07:27:01 compute-0 sudo[30795]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqrwhutasiuexjhcjoxemgfbuucywayz ; /usr/bin/python3'
Nov 22 07:27:01 compute-0 sudo[30795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:27:01 compute-0 python3[30797]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763796416.1305203-33574-85762469470805/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:27:01 compute-0 sudo[30795]: pam_unix(sudo:session): session closed for user root
Nov 22 07:27:03 compute-0 sshd-session[30822]: Unable to negotiate with 192.168.122.11 port 46948: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 22 07:27:03 compute-0 sshd-session[30823]: Connection closed by 192.168.122.11 port 46908 [preauth]
Nov 22 07:27:03 compute-0 sshd-session[30824]: Connection closed by 192.168.122.11 port 46918 [preauth]
Nov 22 07:27:03 compute-0 sshd-session[30826]: Unable to negotiate with 192.168.122.11 port 46922: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 22 07:27:03 compute-0 sshd-session[30825]: Unable to negotiate with 192.168.122.11 port 46936: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 22 07:29:57 compute-0 sshd-session[30833]: Invalid user vyos from 80.94.92.164 port 50974
Nov 22 07:29:57 compute-0 sshd-session[30833]: Connection closed by invalid user vyos 80.94.92.164 port 50974 [preauth]
Nov 22 07:30:04 compute-0 python3[30858]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:35:04 compute-0 sshd-session[29941]: Received disconnect from 38.129.56.128 port 45838:11: disconnected by user
Nov 22 07:35:04 compute-0 sshd-session[29941]: Disconnected from user zuul 38.129.56.128 port 45838
Nov 22 07:35:04 compute-0 sshd-session[29938]: pam_unix(sshd:session): session closed for user zuul
Nov 22 07:35:04 compute-0 systemd-logind[826]: Session 7 logged out. Waiting for processes to exit.
Nov 22 07:35:04 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Nov 22 07:35:04 compute-0 systemd[1]: session-7.scope: Consumed 5.275s CPU time.
Nov 22 07:35:04 compute-0 systemd-logind[826]: Removed session 7.
Nov 22 07:35:12 compute-0 sshd-session[30863]: Invalid user mapr from 80.94.92.164 port 53446
Nov 22 07:35:13 compute-0 sshd-session[30863]: Connection closed by invalid user mapr 80.94.92.164 port 53446 [preauth]
Nov 22 07:40:20 compute-0 sshd-session[30868]: Invalid user oneadmin from 80.94.92.164 port 55966
Nov 22 07:40:20 compute-0 sshd-session[30868]: Connection closed by invalid user oneadmin 80.94.92.164 port 55966 [preauth]
Nov 22 07:45:26 compute-0 sshd-session[30873]: Invalid user master from 80.94.92.164 port 58466
Nov 22 07:45:26 compute-0 sshd-session[30873]: Connection closed by invalid user master 80.94.92.164 port 58466 [preauth]
Nov 22 07:50:14 compute-0 sshd-session[30877]: Invalid user loginuser from 80.94.92.164 port 60932
Nov 22 07:50:14 compute-0 sshd-session[30877]: Connection closed by invalid user loginuser 80.94.92.164 port 60932 [preauth]
Nov 22 07:54:22 compute-0 sshd-session[30880]: Accepted publickey for zuul from 192.168.122.30 port 40368 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 07:54:22 compute-0 systemd-logind[826]: New session 8 of user zuul.
Nov 22 07:54:22 compute-0 systemd[1]: Started Session 8 of User zuul.
Nov 22 07:54:22 compute-0 sshd-session[30880]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 07:54:23 compute-0 python3.9[31033]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 07:54:25 compute-0 sudo[31212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbzqoizdtyavdjyoyofavjvmlfnpbuqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798064.5757267-32-239028347318576/AnsiballZ_command.py'
Nov 22 07:54:25 compute-0 sudo[31212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:54:25 compute-0 python3.9[31214]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:54:33 compute-0 sudo[31212]: pam_unix(sudo:session): session closed for user root
Nov 22 07:54:33 compute-0 sshd-session[30883]: Connection closed by 192.168.122.30 port 40368
Nov 22 07:54:33 compute-0 sshd-session[30880]: pam_unix(sshd:session): session closed for user zuul
Nov 22 07:54:33 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Nov 22 07:54:33 compute-0 systemd[1]: session-8.scope: Consumed 8.036s CPU time.
Nov 22 07:54:33 compute-0 systemd-logind[826]: Session 8 logged out. Waiting for processes to exit.
Nov 22 07:54:33 compute-0 systemd-logind[826]: Removed session 8.
Nov 22 07:54:39 compute-0 sshd-session[31273]: Accepted publickey for zuul from 192.168.122.30 port 44806 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 07:54:39 compute-0 systemd-logind[826]: New session 9 of user zuul.
Nov 22 07:54:39 compute-0 systemd[1]: Started Session 9 of User zuul.
Nov 22 07:54:39 compute-0 sshd-session[31273]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 07:54:40 compute-0 python3.9[31426]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 07:54:40 compute-0 sshd-session[31276]: Connection closed by 192.168.122.30 port 44806
Nov 22 07:54:40 compute-0 sshd-session[31273]: pam_unix(sshd:session): session closed for user zuul
Nov 22 07:54:40 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Nov 22 07:54:40 compute-0 systemd-logind[826]: Session 9 logged out. Waiting for processes to exit.
Nov 22 07:54:40 compute-0 systemd-logind[826]: Removed session 9.
Nov 22 07:54:47 compute-0 sshd-session[31455]: Invalid user loginuser from 80.94.92.164 port 35178
Nov 22 07:54:47 compute-0 sshd-session[31455]: Connection closed by invalid user loginuser 80.94.92.164 port 35178 [preauth]
Nov 22 07:54:58 compute-0 sshd-session[31457]: Accepted publickey for zuul from 192.168.122.30 port 36194 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 07:54:58 compute-0 systemd-logind[826]: New session 10 of user zuul.
Nov 22 07:54:58 compute-0 systemd[1]: Started Session 10 of User zuul.
Nov 22 07:54:58 compute-0 sshd-session[31457]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 07:54:59 compute-0 python3.9[31610]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 22 07:55:00 compute-0 python3.9[31784]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 07:55:01 compute-0 sudo[31934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwzlvakaflmkygntnxsyfsvdoayzdrvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798100.594438-45-1541565277096/AnsiballZ_command.py'
Nov 22 07:55:01 compute-0 sudo[31934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:55:01 compute-0 python3.9[31936]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:55:01 compute-0 sudo[31934]: pam_unix(sudo:session): session closed for user root
Nov 22 07:55:01 compute-0 sudo[32087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktpfwctoqdtdpjathzwlvkpimsynavxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798101.5622873-57-48536324046863/AnsiballZ_stat.py'
Nov 22 07:55:01 compute-0 sudo[32087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:55:02 compute-0 python3.9[32089]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 07:55:02 compute-0 sudo[32087]: pam_unix(sudo:session): session closed for user root
Nov 22 07:55:02 compute-0 sudo[32239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htlutroksczszvgefdexpmvaiylkwwxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798102.3648832-65-64907046837847/AnsiballZ_file.py'
Nov 22 07:55:02 compute-0 sudo[32239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:55:03 compute-0 python3.9[32241]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:55:03 compute-0 sudo[32239]: pam_unix(sudo:session): session closed for user root
Nov 22 07:55:03 compute-0 sudo[32391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvwdoqfdieukqplvaeesqubcfvlekrzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798103.1799052-73-39448474019885/AnsiballZ_stat.py'
Nov 22 07:55:03 compute-0 sudo[32391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:55:03 compute-0 python3.9[32393]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 07:55:03 compute-0 sudo[32391]: pam_unix(sudo:session): session closed for user root
Nov 22 07:55:04 compute-0 sudo[32514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltiozazhcpmlkcxgafauhhfovtamtxmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798103.1799052-73-39448474019885/AnsiballZ_copy.py'
Nov 22 07:55:04 compute-0 sudo[32514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:55:04 compute-0 python3.9[32516]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763798103.1799052-73-39448474019885/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:55:04 compute-0 sudo[32514]: pam_unix(sudo:session): session closed for user root
Nov 22 07:55:04 compute-0 sudo[32666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbavcwwegaxzlmrokyivtjfluejzxxre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798104.6038473-88-172315594297727/AnsiballZ_setup.py'
Nov 22 07:55:04 compute-0 sudo[32666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:55:05 compute-0 python3.9[32668]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 07:55:05 compute-0 sudo[32666]: pam_unix(sudo:session): session closed for user root
Nov 22 07:55:05 compute-0 irqbalance[822]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 22 07:55:05 compute-0 irqbalance[822]: IRQ 26 affinity is now unmanaged
Nov 22 07:55:05 compute-0 sudo[32822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vicsfmhkqprwbagwvmkbervhzqfhahwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798105.4901319-96-193177218942441/AnsiballZ_file.py'
Nov 22 07:55:05 compute-0 sudo[32822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:55:05 compute-0 python3.9[32824]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 07:55:06 compute-0 sudo[32822]: pam_unix(sudo:session): session closed for user root
Nov 22 07:55:06 compute-0 sudo[32974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwjjyteslgzoxdmhqthfqsmrzujpfsxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798106.1851134-105-68366552054436/AnsiballZ_file.py'
Nov 22 07:55:06 compute-0 sudo[32974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:55:06 compute-0 python3.9[32976]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 07:55:06 compute-0 sudo[32974]: pam_unix(sudo:session): session closed for user root
Nov 22 07:55:07 compute-0 python3.9[33126]: ansible-ansible.builtin.service_facts Invoked
Nov 22 07:55:12 compute-0 python3.9[33379]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:55:13 compute-0 python3.9[33529]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 07:55:14 compute-0 python3.9[33683]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 07:55:14 compute-0 sudo[33839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccrowbweeswfyashnfwrxhgmusgqtxra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798114.5603225-153-49174028554391/AnsiballZ_setup.py'
Nov 22 07:55:14 compute-0 sudo[33839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:55:15 compute-0 python3.9[33841]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 07:55:15 compute-0 sudo[33839]: pam_unix(sudo:session): session closed for user root
Nov 22 07:55:15 compute-0 sudo[33923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oossepzwnarkqnnjerukwramiprpemxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798114.5603225-153-49174028554391/AnsiballZ_dnf.py'
Nov 22 07:55:15 compute-0 sudo[33923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:55:16 compute-0 python3.9[33925]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 07:56:22 compute-0 systemd[1]: Reloading.
Nov 22 07:56:22 compute-0 systemd-rc-local-generator[34125]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 07:56:22 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 22 07:56:23 compute-0 systemd[1]: Reloading.
Nov 22 07:56:23 compute-0 systemd-rc-local-generator[34167]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 07:56:23 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 22 07:56:23 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 22 07:56:23 compute-0 systemd[1]: Reloading.
Nov 22 07:56:23 compute-0 systemd-rc-local-generator[34206]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 07:56:23 compute-0 systemd[1]: Starting dnf makecache...
Nov 22 07:56:23 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 22 07:56:23 compute-0 dnf[34215]: Failed determining last makecache time.
Nov 22 07:56:24 compute-0 dnf[34215]: delorean-openstack-barbican-42b4c41831408a8e323 126 kB/s | 3.0 kB     00:00
Nov 22 07:56:24 compute-0 dnf[34215]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 143 kB/s | 3.0 kB     00:00
Nov 22 07:56:24 compute-0 dnf[34215]: delorean-openstack-cinder-1c00d6490d88e436f26ef 149 kB/s | 3.0 kB     00:00
Nov 22 07:56:24 compute-0 dnf[34215]: delorean-python-stevedore-c4acc5639fd2329372142 157 kB/s | 3.0 kB     00:00
Nov 22 07:56:24 compute-0 dnf[34215]: delorean-python-observabilityclient-2f31846d73c 126 kB/s | 3.0 kB     00:00
Nov 22 07:56:24 compute-0 dnf[34215]: delorean-os-net-config-bbae2ed8a159b0435a473f38 170 kB/s | 3.0 kB     00:00
Nov 22 07:56:24 compute-0 dnf[34215]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 149 kB/s | 3.0 kB     00:00
Nov 22 07:56:24 compute-0 dnf[34215]: delorean-python-designate-tests-tempest-347fdbc 163 kB/s | 3.0 kB     00:00
Nov 22 07:56:24 compute-0 dnf[34215]: delorean-openstack-glance-1fd12c29b339f30fe823e 136 kB/s | 3.0 kB     00:00
Nov 22 07:56:24 compute-0 dnf[34215]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 148 kB/s | 3.0 kB     00:00
Nov 22 07:56:24 compute-0 dnf[34215]: delorean-openstack-manila-3c01b7181572c95dac462 125 kB/s | 3.0 kB     00:00
Nov 22 07:56:24 compute-0 dnf[34215]: delorean-python-whitebox-neutron-tests-tempest- 125 kB/s | 3.0 kB     00:00
Nov 22 07:56:24 compute-0 dbus-broker-launch[816]: Noticed file-system modification, trigger reload.
Nov 22 07:56:24 compute-0 dnf[34215]: delorean-openstack-octavia-ba397f07a7331190208c 128 kB/s | 3.0 kB     00:00
Nov 22 07:56:24 compute-0 dbus-broker-launch[816]: Noticed file-system modification, trigger reload.
Nov 22 07:56:24 compute-0 dbus-broker-launch[816]: Noticed file-system modification, trigger reload.
Nov 22 07:56:24 compute-0 dnf[34215]: delorean-openstack-watcher-c014f81a8647287f6dcc 128 kB/s | 3.0 kB     00:00
Nov 22 07:56:24 compute-0 dnf[34215]: delorean-python-tcib-1124124ec06aadbac34f0d340b 127 kB/s | 3.0 kB     00:00
Nov 22 07:56:24 compute-0 dnf[34215]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 166 kB/s | 3.0 kB     00:00
Nov 22 07:56:24 compute-0 dnf[34215]: delorean-openstack-swift-dc98a8463506ac520c469a 155 kB/s | 3.0 kB     00:00
Nov 22 07:56:24 compute-0 dnf[34215]: delorean-python-tempestconf-8515371b7cceebd4282 147 kB/s | 3.0 kB     00:00
Nov 22 07:56:24 compute-0 dnf[34215]: delorean-openstack-heat-ui-013accbfd179753bc3f0 115 kB/s | 3.0 kB     00:00
Nov 22 07:56:24 compute-0 dnf[34215]: CentOS Stream 9 - BaseOS                         70 kB/s | 7.3 kB     00:00
Nov 22 07:56:24 compute-0 dnf[34215]: CentOS Stream 9 - AppStream                      47 kB/s | 7.4 kB     00:00
Nov 22 07:56:24 compute-0 dnf[34215]: CentOS Stream 9 - CRB                            84 kB/s | 7.2 kB     00:00
Nov 22 07:56:25 compute-0 dnf[34215]: CentOS Stream 9 - Extras packages                47 kB/s | 8.3 kB     00:00
Nov 22 07:56:25 compute-0 dnf[34215]: dlrn-antelope-testing                           137 kB/s | 3.0 kB     00:00
Nov 22 07:56:25 compute-0 dnf[34215]: dlrn-antelope-build-deps                        170 kB/s | 3.0 kB     00:00
Nov 22 07:56:25 compute-0 dnf[34215]: centos9-rabbitmq                                 76 kB/s | 3.0 kB     00:00
Nov 22 07:56:25 compute-0 dnf[34215]: centos9-storage                                 147 kB/s | 3.0 kB     00:00
Nov 22 07:56:25 compute-0 dnf[34215]: centos9-opstools                                 38 kB/s | 3.0 kB     00:00
Nov 22 07:56:25 compute-0 dnf[34215]: NFV SIG OpenvSwitch                             114 kB/s | 3.0 kB     00:00
Nov 22 07:56:25 compute-0 dnf[34215]: repo-setup-centos-appstream                     148 kB/s | 4.4 kB     00:00
Nov 22 07:56:25 compute-0 dnf[34215]: repo-setup-centos-baseos                        172 kB/s | 3.9 kB     00:00
Nov 22 07:56:25 compute-0 dnf[34215]: repo-setup-centos-highavailability               77 kB/s | 3.9 kB     00:00
Nov 22 07:56:25 compute-0 dnf[34215]: repo-setup-centos-powertools                    163 kB/s | 4.3 kB     00:00
Nov 22 07:56:25 compute-0 dnf[34215]: Extra Packages for Enterprise Linux 9 - x86_64  280 kB/s |  33 kB     00:00
Nov 22 07:56:26 compute-0 dnf[34215]: Metadata cache created.
Nov 22 07:56:26 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 22 07:56:26 compute-0 systemd[1]: Finished dnf makecache.
Nov 22 07:56:26 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.844s CPU time.
Nov 22 07:57:42 compute-0 kernel: SELinux:  Converting 2718 SID table entries...
Nov 22 07:57:42 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 07:57:42 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 22 07:57:42 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 07:57:42 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 22 07:57:42 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 07:57:42 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 07:57:42 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 07:57:42 compute-0 dbus-broker-launch[817]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 22 07:57:42 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 07:57:42 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 07:57:42 compute-0 systemd[1]: Reloading.
Nov 22 07:57:42 compute-0 systemd-rc-local-generator[34592]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 07:57:42 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 07:57:43 compute-0 sudo[33923]: pam_unix(sudo:session): session closed for user root
Nov 22 07:57:44 compute-0 sudo[35498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxybsktiqungxfkqzipkucthrircesif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798264.0784059-165-106061277238235/AnsiballZ_command.py'
Nov 22 07:57:44 compute-0 sudo[35498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:57:44 compute-0 python3.9[35500]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:57:46 compute-0 sudo[35498]: pam_unix(sudo:session): session closed for user root
Nov 22 07:57:47 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 07:57:47 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 07:57:47 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.107s CPU time.
Nov 22 07:57:47 compute-0 systemd[1]: run-rb36795aeb12740e0b834428fd390c1fe.service: Deactivated successfully.
Nov 22 07:57:47 compute-0 sudo[35780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifkxlfmvrejkajklriogojcctembfbwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798266.8113775-173-174942685348906/AnsiballZ_selinux.py'
Nov 22 07:57:47 compute-0 sudo[35780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:57:47 compute-0 python3.9[35782]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 22 07:57:47 compute-0 sudo[35780]: pam_unix(sudo:session): session closed for user root
Nov 22 07:57:48 compute-0 sudo[35932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pknlkjchvbjpsygdwlmltfuxiwglhpvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798268.1090312-184-279553460253393/AnsiballZ_command.py'
Nov 22 07:57:48 compute-0 sudo[35932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:57:48 compute-0 python3.9[35934]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 22 07:57:49 compute-0 sudo[35932]: pam_unix(sudo:session): session closed for user root
Nov 22 07:57:50 compute-0 sudo[36085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peimbjusfrjcohxnkuxjadmnpilkvxxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798269.7877843-192-270614485623274/AnsiballZ_file.py'
Nov 22 07:57:50 compute-0 sudo[36085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:57:53 compute-0 python3.9[36087]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:57:53 compute-0 sudo[36085]: pam_unix(sudo:session): session closed for user root
Nov 22 07:57:54 compute-0 sudo[36237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-artqwknclhkuzidhtuewpkqsmipozyqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798273.878787-200-77872671523233/AnsiballZ_mount.py'
Nov 22 07:57:54 compute-0 sudo[36237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:57:54 compute-0 python3.9[36239]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 22 07:57:54 compute-0 sudo[36237]: pam_unix(sudo:session): session closed for user root
Nov 22 07:57:55 compute-0 sudo[36389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdwxhvhtmqgerompliuhdyxngnhhwjyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798275.4993434-228-10350404448917/AnsiballZ_file.py'
Nov 22 07:57:55 compute-0 sudo[36389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:02 compute-0 python3.9[36391]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 07:58:02 compute-0 sudo[36389]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:03 compute-0 sudo[36541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjgmreqihdughlbboxllluzweoynnlcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798283.131738-236-64473209990569/AnsiballZ_stat.py'
Nov 22 07:58:03 compute-0 sudo[36541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:03 compute-0 python3.9[36543]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 07:58:03 compute-0 sudo[36541]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:03 compute-0 sudo[36664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bveorzehmwjjgbqmkbnjuvzksahrrvxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798283.131738-236-64473209990569/AnsiballZ_copy.py'
Nov 22 07:58:03 compute-0 sudo[36664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:04 compute-0 python3.9[36666]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798283.131738-236-64473209990569/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d7d3c223199da9fcef714ed30a45020930d987d6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:58:04 compute-0 sudo[36664]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:04 compute-0 sudo[36816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jriigsxggsjdwoyigkzimabmifopalgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798284.618041-260-109220728575486/AnsiballZ_stat.py'
Nov 22 07:58:04 compute-0 sudo[36816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:05 compute-0 python3.9[36818]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 07:58:05 compute-0 sudo[36816]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:05 compute-0 sudo[36968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqqjtsxxxvllazxrwroftbbzyotsbsou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798285.2817092-268-84284474849892/AnsiballZ_command.py'
Nov 22 07:58:05 compute-0 sudo[36968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:05 compute-0 python3.9[36970]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:58:05 compute-0 sudo[36968]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:06 compute-0 sudo[37121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvlyaftazqqyzfgrsrdfsbaqyzczucjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798286.1179419-276-133298940001521/AnsiballZ_file.py'
Nov 22 07:58:06 compute-0 sudo[37121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:06 compute-0 python3.9[37123]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:58:06 compute-0 sudo[37121]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:07 compute-0 sudo[37273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuseqsdvnvksnicnpvgpwywwykqviiib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798286.9358463-287-23914547853856/AnsiballZ_getent.py'
Nov 22 07:58:07 compute-0 sudo[37273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:07 compute-0 python3.9[37275]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 22 07:58:07 compute-0 sudo[37273]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:07 compute-0 rsyslogd[1013]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 07:58:08 compute-0 sudo[37427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjqhoxvbmjjbxslmyytfqytbkoyrcpbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798287.8931663-295-132638886418117/AnsiballZ_group.py'
Nov 22 07:58:08 compute-0 sudo[37427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:08 compute-0 python3.9[37429]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 07:58:08 compute-0 groupadd[37430]: group added to /etc/group: name=qemu, GID=107
Nov 22 07:58:09 compute-0 groupadd[37430]: group added to /etc/gshadow: name=qemu
Nov 22 07:58:09 compute-0 groupadd[37430]: new group: name=qemu, GID=107
Nov 22 07:58:09 compute-0 sudo[37427]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:09 compute-0 sudo[37585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaeyuvgyqtvhcilegaiiaytggtdpweih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798289.2475362-303-85995117046325/AnsiballZ_user.py'
Nov 22 07:58:09 compute-0 sudo[37585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:09 compute-0 python3.9[37587]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 07:58:10 compute-0 useradd[37589]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 22 07:58:10 compute-0 sudo[37585]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:11 compute-0 sudo[37745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svdpoozpcvsyctpuwwkcrfzarhtbwork ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798290.8349519-311-220325624355615/AnsiballZ_getent.py'
Nov 22 07:58:11 compute-0 sudo[37745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:11 compute-0 python3.9[37747]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 22 07:58:11 compute-0 sudo[37745]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:11 compute-0 sudo[37898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drdhbojerbnofolwztjvdbyrkormcrzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798291.4520674-319-189585445104619/AnsiballZ_group.py'
Nov 22 07:58:11 compute-0 sudo[37898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:11 compute-0 python3.9[37900]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 07:58:12 compute-0 groupadd[37901]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 22 07:58:12 compute-0 groupadd[37901]: group added to /etc/gshadow: name=hugetlbfs
Nov 22 07:58:12 compute-0 groupadd[37901]: new group: name=hugetlbfs, GID=42477
Nov 22 07:58:12 compute-0 sudo[37898]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:12 compute-0 sudo[38056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otjvtakjlbzlypypoytvxdupuvgrcfng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798292.4995234-328-186802255554188/AnsiballZ_file.py'
Nov 22 07:58:12 compute-0 sudo[38056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:12 compute-0 python3.9[38058]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 22 07:58:12 compute-0 sudo[38056]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:13 compute-0 sudo[38208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyqhmspjukamllhrucfilpdmhmzcxtab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798293.3812487-339-226671213768808/AnsiballZ_dnf.py'
Nov 22 07:58:13 compute-0 sudo[38208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:14 compute-0 python3.9[38210]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 07:58:18 compute-0 sudo[38208]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:18 compute-0 sudo[38361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpzdrrtvbndreyuspqdwongncamqszkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798298.342246-347-160865716850418/AnsiballZ_file.py'
Nov 22 07:58:18 compute-0 sudo[38361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:18 compute-0 python3.9[38363]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 07:58:18 compute-0 sudo[38361]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:19 compute-0 sudo[38513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmkswqbbrucgrdvxerthuwgcadsueuay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798299.1196404-355-89612413125856/AnsiballZ_stat.py'
Nov 22 07:58:19 compute-0 sudo[38513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:19 compute-0 python3.9[38515]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 07:58:19 compute-0 sudo[38513]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:19 compute-0 sudo[38636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jefzabmounaecjykqziceptvqnbujdgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798299.1196404-355-89612413125856/AnsiballZ_copy.py'
Nov 22 07:58:19 compute-0 sudo[38636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:20 compute-0 python3.9[38638]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763798299.1196404-355-89612413125856/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 07:58:20 compute-0 sudo[38636]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:21 compute-0 sudo[38788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksitufxujhtitclokhxmhgfcxhmnopkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798300.769423-370-219506764229709/AnsiballZ_systemd.py'
Nov 22 07:58:21 compute-0 sudo[38788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:21 compute-0 python3.9[38790]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 07:58:21 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 22 07:58:21 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 22 07:58:21 compute-0 kernel: Bridge firewalling registered
Nov 22 07:58:21 compute-0 systemd-modules-load[38794]: Inserted module 'br_netfilter'
Nov 22 07:58:21 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 22 07:58:21 compute-0 sudo[38788]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:22 compute-0 sudo[38947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bakfmvtayebrmtppmanigrkpbuikcmbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798302.701932-378-93520339202418/AnsiballZ_stat.py'
Nov 22 07:58:22 compute-0 sudo[38947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:23 compute-0 python3.9[38949]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 07:58:23 compute-0 sudo[38947]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:23 compute-0 sudo[39070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maufculeniknrpdugsqavacavrorsnjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798302.701932-378-93520339202418/AnsiballZ_copy.py'
Nov 22 07:58:23 compute-0 sudo[39070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:23 compute-0 python3.9[39072]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763798302.701932-378-93520339202418/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 07:58:23 compute-0 sudo[39070]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:24 compute-0 sudo[39222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzfykixxtjoyhgrskkrdhndunmmervna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798304.1337976-396-60180270137813/AnsiballZ_dnf.py'
Nov 22 07:58:24 compute-0 sudo[39222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:24 compute-0 python3.9[39224]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 07:58:36 compute-0 dbus-broker-launch[816]: Noticed file-system modification, trigger reload.
Nov 22 07:58:36 compute-0 dbus-broker-launch[816]: Noticed file-system modification, trigger reload.
Nov 22 07:58:38 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 07:58:38 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 07:58:38 compute-0 systemd[1]: Reloading.
Nov 22 07:58:38 compute-0 systemd-rc-local-generator[39283]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 07:58:38 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 07:58:42 compute-0 sudo[39222]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:43 compute-0 python3.9[42618]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 07:58:44 compute-0 python3.9[43110]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 22 07:58:45 compute-0 python3.9[43260]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 07:58:46 compute-0 sudo[43410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktmqjrvruwixlerljckhhhjvzrnpykqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798325.9961772-435-198511614736637/AnsiballZ_command.py'
Nov 22 07:58:46 compute-0 sudo[43410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:46 compute-0 python3.9[43412]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:58:46 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 22 07:58:47 compute-0 systemd[1]: Starting Authorization Manager...
Nov 22 07:58:47 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 22 07:58:47 compute-0 polkitd[43629]: Started polkitd version 0.117
Nov 22 07:58:47 compute-0 polkitd[43629]: Loading rules from directory /etc/polkit-1/rules.d
Nov 22 07:58:47 compute-0 polkitd[43629]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 22 07:58:47 compute-0 polkitd[43629]: Finished loading, compiling and executing 2 rules
Nov 22 07:58:47 compute-0 systemd[1]: Started Authorization Manager.
Nov 22 07:58:47 compute-0 polkitd[43629]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 22 07:58:47 compute-0 sudo[43410]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:48 compute-0 sudo[43797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euwxguwynqdgethudionirpofmxtaasn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798327.8434515-444-143576434478364/AnsiballZ_systemd.py'
Nov 22 07:58:48 compute-0 sudo[43797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:48 compute-0 python3.9[43799]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 07:58:48 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 22 07:58:48 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 22 07:58:48 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 22 07:58:48 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 22 07:58:49 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 22 07:58:49 compute-0 sudo[43797]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:49 compute-0 python3.9[43961]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 22 07:58:51 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 07:58:51 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 07:58:51 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.920s CPU time.
Nov 22 07:58:51 compute-0 systemd[1]: run-rc941ba60665e4380a812e745f78541fe.service: Deactivated successfully.
Nov 22 07:58:54 compute-0 sudo[44112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojhwbxhkjfwfjcsfbvsaouoqzbkxcgdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798333.5188901-501-147802974778958/AnsiballZ_systemd.py'
Nov 22 07:58:54 compute-0 sudo[44112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:54 compute-0 python3.9[44114]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 07:58:54 compute-0 systemd[1]: Reloading.
Nov 22 07:58:54 compute-0 systemd-rc-local-generator[44145]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 07:58:54 compute-0 sudo[44112]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:55 compute-0 sudo[44302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dckewwomubufjovhjsjzielrwqtkglps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798334.8234046-501-121573639250659/AnsiballZ_systemd.py'
Nov 22 07:58:55 compute-0 sudo[44302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:55 compute-0 python3.9[44304]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 07:58:55 compute-0 systemd[1]: Reloading.
Nov 22 07:58:55 compute-0 systemd-rc-local-generator[44333]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 07:58:55 compute-0 sudo[44302]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:56 compute-0 sudo[44490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eypdkfiymkegkukefmdqfqkuuromdzze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798335.9136698-517-131540035194772/AnsiballZ_command.py'
Nov 22 07:58:56 compute-0 sudo[44490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:56 compute-0 python3.9[44492]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:58:56 compute-0 sudo[44490]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:56 compute-0 sudo[44643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-romphiaakpsuncddiogpejbeixmfaati ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798336.581698-525-241276226492800/AnsiballZ_command.py'
Nov 22 07:58:56 compute-0 sudo[44643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:57 compute-0 python3.9[44645]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:58:57 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 22 07:58:57 compute-0 sudo[44643]: pam_unix(sudo:session): session closed for user root
Nov 22 07:58:57 compute-0 sudo[44796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxnltyltqgssmzqarqnrxnsuflyhdilm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798337.253834-533-62217593230284/AnsiballZ_command.py'
Nov 22 07:58:57 compute-0 sudo[44796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:58:57 compute-0 python3.9[44798]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:58:59 compute-0 sudo[44796]: pam_unix(sudo:session): session closed for user root
Nov 22 07:59:00 compute-0 sudo[44958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzotyvipphfwvarbxihgwbebrkqzbhld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798339.8507009-541-158743177388061/AnsiballZ_command.py'
Nov 22 07:59:00 compute-0 sudo[44958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:00 compute-0 python3.9[44960]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:59:00 compute-0 sudo[44958]: pam_unix(sudo:session): session closed for user root
Nov 22 07:59:00 compute-0 sudo[45111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxhuwsbieuikzlnwurmxmhflmpthrjxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798340.6819456-549-34140496117574/AnsiballZ_systemd.py'
Nov 22 07:59:00 compute-0 sudo[45111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:01 compute-0 python3.9[45113]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 07:59:01 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 22 07:59:01 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Nov 22 07:59:01 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Nov 22 07:59:01 compute-0 systemd[1]: Starting Apply Kernel Variables...
Nov 22 07:59:01 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 22 07:59:01 compute-0 systemd[1]: Finished Apply Kernel Variables.
Nov 22 07:59:01 compute-0 sudo[45111]: pam_unix(sudo:session): session closed for user root
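
Restarting systemd-sysctl.service (the "Apply Kernel Variables" unit) re-applies every sysctl drop-in, picking up whatever the play wrote under /etc/sysctl.d. Equivalent by hand:

    # Re-apply kernel parameters from /etc/sysctl.d, /run/sysctl.d and /usr/lib/sysctl.d
    systemctl restart systemd-sysctl.service
    # or, without going through the unit:
    sysctl --system
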
Nov 22 07:59:01 compute-0 sshd-session[31460]: Connection closed by 192.168.122.30 port 36194
Nov 22 07:59:01 compute-0 sshd-session[31457]: pam_unix(sshd:session): session closed for user zuul
Nov 22 07:59:01 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Nov 22 07:59:01 compute-0 systemd[1]: session-10.scope: Consumed 2min 15.749s CPU time.
Nov 22 07:59:01 compute-0 systemd-logind[826]: Session 10 logged out. Waiting for processes to exit.
Nov 22 07:59:01 compute-0 systemd-logind[826]: Removed session 10.
Nov 22 07:59:09 compute-0 sshd-session[45146]: Accepted publickey for zuul from 192.168.122.30 port 42750 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 07:59:09 compute-0 systemd-logind[826]: New session 11 of user zuul.
Nov 22 07:59:09 compute-0 systemd[1]: Started Session 11 of User zuul.
Nov 22 07:59:09 compute-0 sshd-session[45146]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 07:59:10 compute-0 sshd-session[45144]: Invalid user loginuser from 80.94.92.164 port 37648
Nov 22 07:59:10 compute-0 sshd-session[45144]: Connection closed by invalid user loginuser 80.94.92.164 port 37648 [preauth]
Nov 22 07:59:10 compute-0 python3.9[45299]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 07:59:12 compute-0 python3.9[45453]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 07:59:13 compute-0 sudo[45607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmpesbgxiexwvngxevoiqhiyhyhthgrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798352.7500925-50-214240181785508/AnsiballZ_command.py'
Nov 22 07:59:13 compute-0 sudo[45607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:13 compute-0 python3.9[45609]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:59:13 compute-0 sudo[45607]: pam_unix(sudo:session): session closed for user root
Nov 22 07:59:14 compute-0 python3.9[45760]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 07:59:15 compute-0 sudo[45914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcbbmwvcbocrhtqbzdzsntlrkhsmxacc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798354.8525093-70-98700539877200/AnsiballZ_setup.py'
Nov 22 07:59:15 compute-0 sudo[45914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:15 compute-0 python3.9[45916]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 07:59:15 compute-0 sudo[45914]: pam_unix(sudo:session): session closed for user root
Nov 22 07:59:16 compute-0 sudo[45998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuikuwjidjsxtapwtnndfmxdkkruevjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798354.8525093-70-98700539877200/AnsiballZ_dnf.py'
Nov 22 07:59:16 compute-0 sudo[45998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:16 compute-0 python3.9[46000]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 07:59:17 compute-0 sudo[45998]: pam_unix(sudo:session): session closed for user root
Nov 22 07:59:18 compute-0 sudo[46151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkdvvxrouvfmibkotgrjocmbeoarkrlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798358.064556-82-204472391637882/AnsiballZ_setup.py'
Nov 22 07:59:18 compute-0 sudo[46151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:18 compute-0 python3.9[46153]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 07:59:18 compute-0 sudo[46151]: pam_unix(sudo:session): session closed for user root
Nov 22 07:59:19 compute-0 sudo[46322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbfhvtauscxdlrrpairnilordkrcfttp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798359.2203386-93-141059399439853/AnsiballZ_file.py'
Nov 22 07:59:19 compute-0 sudo[46322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:19 compute-0 python3.9[46324]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:59:19 compute-0 sudo[46322]: pam_unix(sudo:session): session closed for user root
Nov 22 07:59:20 compute-0 sudo[46474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdngcjuwnouqxpnljwssdgrxjfbzsbet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798360.1303358-101-141160066787816/AnsiballZ_command.py'
Nov 22 07:59:20 compute-0 sudo[46474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:20 compute-0 python3.9[46476]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 07:59:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1746588836-merged.mount: Deactivated successfully.
Nov 22 07:59:21 compute-0 podman[46477]: 2025-11-22 07:59:21.412170117 +0000 UTC m=+0.562390869 system refresh
Nov 22 07:59:21 compute-0 sudo[46474]: pam_unix(sudo:session): session closed for user root
Nov 22 07:59:22 compute-0 sudo[46638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxgowixgcbqhmojreammmkhyzvwbsduq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798361.6227238-109-198975448942171/AnsiballZ_stat.py'
Nov 22 07:59:22 compute-0 sudo[46638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 07:59:22 compute-0 python3.9[46640]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 07:59:22 compute-0 sudo[46638]: pam_unix(sudo:session): session closed for user root
Nov 22 07:59:22 compute-0 sudo[46761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liqfbcngeoyyjarrbqvnkojefrhwywnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798361.6227238-109-198975448942171/AnsiballZ_copy.py'
Nov 22 07:59:22 compute-0 sudo[46761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:23 compute-0 python3.9[46763]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798361.6227238-109-198975448942171/.source.json follow=False _original_basename=podman_network_config.j2 checksum=bc5cd84523a9676cffc28d6ae9158da9db642c0f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 07:59:23 compute-0 sudo[46761]: pam_unix(sudo:session): session closed for user root
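
This block prepares podman's network configuration: it creates /etc/containers/networks, inspects the default "podman" network (the first podman call after the config change also triggers the "system refresh" event above), and installs a podman.json rendered from the podman_network_config.j2 template. A shell sketch of the same flow; the JSON body itself is not logged, only its checksum:

    # Inspect the default network, then install the rendered definition
    mkdir -p /etc/containers/networks
    podman network inspect podman
    install -o root -g root -m 0644 podman.json /etc/containers/networks/podman.json
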
Nov 22 07:59:23 compute-0 sudo[46913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zolzkdpuyuzubsrusogprgleturixvcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798363.252395-124-266829343245502/AnsiballZ_stat.py'
Nov 22 07:59:23 compute-0 sudo[46913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:23 compute-0 python3.9[46915]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 07:59:23 compute-0 sudo[46913]: pam_unix(sudo:session): session closed for user root
Nov 22 07:59:24 compute-0 sudo[47036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltgxunzbgsmtsfuclsfzvnatxqdjmyko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798363.252395-124-266829343245502/AnsiballZ_copy.py'
Nov 22 07:59:24 compute-0 sudo[47036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:24 compute-0 python3.9[47038]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763798363.252395-124-266829343245502/.source.conf follow=False _original_basename=registries.conf.j2 checksum=efc5ad1cbcb8a1754fb5b515c01a51cc6cc54ec1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 07:59:24 compute-0 sudo[47036]: pam_unix(sudo:session): session closed for user root
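
The drop-in lands in /etc/containers/registries.conf.d/, where podman merges it over the main registries.conf. Its content is not logged (only the SHA-1), so the body below is a hypothetical example of the file format, not the deployed file:

    # Hypothetical 20-edpm-podman-registries.conf, for illustration only
    cat > /etc/containers/registries.conf.d/20-edpm-podman-registries.conf <<'EOF'
    unqualified-search-registries = ["quay.io"]
    EOF
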
Nov 22 07:59:25 compute-0 sudo[47188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrjmexzxjtswsjyxirbyiymryeutgfwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798364.5434678-140-203028098751141/AnsiballZ_ini_file.py'
Nov 22 07:59:25 compute-0 sudo[47188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:25 compute-0 python3.9[47190]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 07:59:25 compute-0 sudo[47188]: pam_unix(sudo:session): session closed for user root
Nov 22 07:59:25 compute-0 sudo[47340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vorduwqnzqxbukvyiroqimpdznximwmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798365.6085882-140-209447776448207/AnsiballZ_ini_file.py'
Nov 22 07:59:25 compute-0 sudo[47340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:26 compute-0 python3.9[47342]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 07:59:26 compute-0 sudo[47340]: pam_unix(sudo:session): session closed for user root
Nov 22 07:59:26 compute-0 sudo[47492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpuwkjbcqjuctvwhnzwhmbexfzrtmsmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798366.330972-140-73417078232518/AnsiballZ_ini_file.py'
Nov 22 07:59:26 compute-0 sudo[47492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:26 compute-0 python3.9[47494]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 07:59:26 compute-0 sudo[47492]: pam_unix(sudo:session): session closed for user root
Nov 22 07:59:27 compute-0 sudo[47644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unthijdwtsyqyetwomyefdztwnydyiaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798366.94558-140-157143157357629/AnsiballZ_ini_file.py'
Nov 22 07:59:27 compute-0 sudo[47644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:27 compute-0 python3.9[47646]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 07:59:27 compute-0 sudo[47644]: pam_unix(sudo:session): session closed for user root
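
Taken together, the four ini_file tasks above pin podman's container defaults: a pids limit of 4096, journald event logging, the crun runtime, and the netavark network backend. The sections, options, and values below are all taken from the invocations; any surrounding file content is left untouched:

    # Read back the options written above; the file now contains at least:
    cat /etc/containers/containers.conf
    # [containers]
    # pids_limit = 4096
    # [engine]
    # events_logger = "journald"
    # runtime = "crun"
    # [network]
    # network_backend = "netavark"
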
Nov 22 07:59:28 compute-0 python3.9[47796]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 07:59:28 compute-0 sudo[47948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biuxgyrtkdnnfsjmvwdqnlgstklggxkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798368.5801845-180-106159224872689/AnsiballZ_dnf.py'
Nov 22 07:59:28 compute-0 sudo[47948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:29 compute-0 python3.9[47950]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 07:59:30 compute-0 sudo[47948]: pam_unix(sudo:session): session closed for user root
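
This dnf task and the several that follow run with download_only=True: the RPMs are fetched into the local dnf cache but not installed, so the state=present installs later in the run (for example the openvswitch and final package-set installs below) work from already-downloaded packages. Shell equivalent for this first set:

    # Prefetch packages without installing them
    dnf install -y --downloadonly driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat \
        iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos
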
Nov 22 07:59:30 compute-0 sudo[48101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbsfejfjkonnyofkpfgooebfipqgdyfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798370.5516365-188-258948682984434/AnsiballZ_dnf.py'
Nov 22 07:59:30 compute-0 sudo[48101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:31 compute-0 python3.9[48103]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 07:59:33 compute-0 sudo[48101]: pam_unix(sudo:session): session closed for user root
Nov 22 07:59:33 compute-0 sudo[48261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crssjdarmsbfvueshivtpoxsviqbxqhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798373.4112594-198-212377387691227/AnsiballZ_dnf.py'
Nov 22 07:59:33 compute-0 sudo[48261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:33 compute-0 python3.9[48263]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 07:59:35 compute-0 sudo[48261]: pam_unix(sudo:session): session closed for user root
Nov 22 07:59:35 compute-0 sudo[48414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtewwzvfeovtzfjogvdtaxuafykyazuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798375.5161493-207-1825072495561/AnsiballZ_dnf.py'
Nov 22 07:59:35 compute-0 sudo[48414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:36 compute-0 python3.9[48416]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 07:59:37 compute-0 sudo[48414]: pam_unix(sudo:session): session closed for user root
Nov 22 07:59:38 compute-0 sudo[48567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdzqgdowquuuevyrqzikwyusnvwjmufl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798377.8762176-218-5500724023829/AnsiballZ_dnf.py'
Nov 22 07:59:38 compute-0 sudo[48567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:38 compute-0 python3.9[48569]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 07:59:40 compute-0 sudo[48567]: pam_unix(sudo:session): session closed for user root
Nov 22 07:59:40 compute-0 sudo[48723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afgrbbiolzpyifaeajwqyuhgyqactvwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798380.4810655-226-201376254701834/AnsiballZ_dnf.py'
Nov 22 07:59:40 compute-0 sudo[48723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:41 compute-0 python3.9[48725]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 07:59:45 compute-0 sudo[48723]: pam_unix(sudo:session): session closed for user root
Nov 22 07:59:45 compute-0 sudo[48892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upxanozktwaliqzscezojazslsuqfmxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798385.5021439-235-72486996340393/AnsiballZ_dnf.py'
Nov 22 07:59:45 compute-0 sudo[48892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:45 compute-0 python3.9[48894]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 07:59:47 compute-0 sudo[48892]: pam_unix(sudo:session): session closed for user root
Nov 22 07:59:47 compute-0 sudo[49045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajlushhaddjzvxnsdqtknffleypbvpgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798387.577179-244-134174532765342/AnsiballZ_dnf.py'
Nov 22 07:59:47 compute-0 sudo[49045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 07:59:48 compute-0 python3.9[49047]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt', 'libvirt-admin', 'libvirt-client', 'libvirt-daemon', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 07:59:59 compute-0 sudo[49045]: pam_unix(sudo:session): session closed for user root
Nov 22 08:00:00 compute-0 sudo[49381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvfvfjxdaupetruzdsspdhohfsavappc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798399.8738708-253-257552119810990/AnsiballZ_dnf.py'
Nov 22 08:00:00 compute-0 sudo[49381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:00:00 compute-0 python3.9[49383]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 08:00:01 compute-0 sudo[49381]: pam_unix(sudo:session): session closed for user root
Nov 22 08:00:03 compute-0 sudo[49537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jecjhphzwkjzoelpushaxyoczvdsxaxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798402.195043-264-254635417930998/AnsiballZ_file.py'
Nov 22 08:00:03 compute-0 sudo[49537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:00:04 compute-0 python3.9[49539]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:00:04 compute-0 sudo[49537]: pam_unix(sudo:session): session closed for user root
Nov 22 08:00:04 compute-0 sudo[49712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofmybqzokrygqzofelkummpjefkiknoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798404.2908962-272-102219246765687/AnsiballZ_stat.py'
Nov 22 08:00:04 compute-0 sudo[49712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:00:04 compute-0 python3.9[49714]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:00:04 compute-0 sudo[49712]: pam_unix(sudo:session): session closed for user root
Nov 22 08:00:05 compute-0 sudo[49835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqgzkcahgbhsezeflahnlcbphqdikkqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798404.2908962-272-102219246765687/AnsiballZ_copy.py'
Nov 22 08:00:05 compute-0 sudo[49835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:00:05 compute-0 python3.9[49837]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1763798404.2908962-272-102219246765687/.source.json _original_basename=.o54floni follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:00:05 compute-0 sudo[49835]: pam_unix(sudo:session): session closed for user root
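
The checksum logged for the new /root/.config/containers/auth.json, bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f, appears to be the SHA-1 of the two-byte string "{}": the play seeds an empty JSON object, creating the file that podman's --authfile option expects without storing any registry credentials. This is easy to check:

    # Verify that the logged checksum matches an empty JSON object
    printf '{}' | sha1sum
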
Nov 22 08:00:06 compute-0 sudo[49987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czatodnhosxhpjweiqhdvyhmknrznjxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798405.5231764-290-111158078179931/AnsiballZ_podman_image.py'
Nov 22 08:00:06 compute-0 sudo[49987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:00:06 compute-0 python3.9[49989]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 22 08:00:06 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:00:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3541143762-merged.mount: Deactivated successfully.
Nov 22 08:00:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3541143762-lower\x2dmapped.mount: Deactivated successfully.
Nov 22 08:00:14 compute-0 podman[50001]: 2025-11-22 08:00:14.768317264 +0000 UTC m=+8.489256120 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 22 08:00:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:00:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:00:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:00:14 compute-0 sudo[49987]: pam_unix(sudo:session): session closed for user root
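
podman_image with pull=True and state=present amounts to a podman pull using the auth file configured above. Note the module receives tag=latest while the image name already embeds :current-podified; the pull event confirms the tag inside the name is what gets fetched. The pulls that follow repeat this pattern for the other service images. Shell equivalent:

    # Pull the OVN controller image using the (empty) auth file seeded earlier
    podman pull --authfile /root/.config/containers/auth.json \
        quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
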
Nov 22 08:00:15 compute-0 sudo[50295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cabbqigubdqivqzaedqspjykaznphjpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798415.3592165-301-195421564855104/AnsiballZ_podman_image.py'
Nov 22 08:00:15 compute-0 sudo[50295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:00:15 compute-0 python3.9[50297]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 22 08:00:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:00:31 compute-0 podman[50309]: 2025-11-22 08:00:31.259602031 +0000 UTC m=+15.275587211 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 08:00:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:00:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:00:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:00:31 compute-0 sudo[50295]: pam_unix(sudo:session): session closed for user root
Nov 22 08:00:32 compute-0 sudo[50609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nenqjzalsiqjtltxekeeazuyxfezfuyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798431.8129299-311-76995066549583/AnsiballZ_podman_image.py'
Nov 22 08:00:32 compute-0 sudo[50609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:00:32 compute-0 python3.9[50611]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 22 08:00:32 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:00:33 compute-0 podman[50623]: 2025-11-22 08:00:33.907427201 +0000 UTC m=+1.573442229 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 22 08:00:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:00:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:00:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:00:34 compute-0 sudo[50609]: pam_unix(sudo:session): session closed for user root
Nov 22 08:00:34 compute-0 sudo[50858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bohxrolnxoxljbtabopnrjbeakohobuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798434.3462844-320-69406570536774/AnsiballZ_podman_image.py'
Nov 22 08:00:34 compute-0 sudo[50858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:00:34 compute-0 python3.9[50860]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 22 08:00:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:01:01 compute-0 CROND[50942]: (root) CMD (run-parts /etc/cron.hourly)
Nov 22 08:01:01 compute-0 run-parts[50945]: (/etc/cron.hourly) starting 0anacron
Nov 22 08:01:01 compute-0 anacron[50953]: Anacron started on 2025-11-22
Nov 22 08:01:01 compute-0 anacron[50953]: Will run job `cron.daily' in 5 min.
Nov 22 08:01:01 compute-0 anacron[50953]: Will run job `cron.weekly' in 25 min.
Nov 22 08:01:01 compute-0 anacron[50953]: Will run job `cron.monthly' in 45 min.
Nov 22 08:01:01 compute-0 anacron[50953]: Jobs will be executed sequentially
Nov 22 08:01:01 compute-0 run-parts[50955]: (/etc/cron.hourly) finished 0anacron
Nov 22 08:01:01 compute-0 CROND[50941]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 22 08:01:08 compute-0 podman[50873]: 2025-11-22 08:01:08.05636653 +0000 UTC m=+33.120184325 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 22 08:01:08 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:01:08 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:01:08 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:01:08 compute-0 sudo[50858]: pam_unix(sudo:session): session closed for user root
Nov 22 08:01:08 compute-0 sudo[51162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvgfcllsppuihdrehiahuhapdklhfdyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798468.6969492-331-64126818454649/AnsiballZ_podman_image.py'
Nov 22 08:01:08 compute-0 sudo[51162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:01:09 compute-0 python3.9[51164]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 22 08:01:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:01:27 compute-0 podman[51176]: 2025-11-22 08:01:27.475239491 +0000 UTC m=+18.257117403 image pull 9bdd8ae00d8946a2ce2c9113b1770ecde661cc666ba6fcde2c074d087d635114 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Nov 22 08:01:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:01:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:01:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:01:27 compute-0 sudo[51162]: pam_unix(sudo:session): session closed for user root
Nov 22 08:01:28 compute-0 sudo[51491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqypdinvmapfhvhgrnolpjznqwnigwji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798487.9304676-331-259105728899539/AnsiballZ_podman_image.py'
Nov 22 08:01:28 compute-0 sudo[51491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:01:28 compute-0 python3.9[51493]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 22 08:01:29 compute-0 podman[51506]: 2025-11-22 08:01:29.488502876 +0000 UTC m=+1.072627216 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Nov 22 08:01:29 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:01:29 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:01:29 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:01:29 compute-0 sudo[51491]: pam_unix(sudo:session): session closed for user root
Nov 22 08:01:30 compute-0 sudo[51778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqfvfkssfujurcogdmaseyfadvfdestb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798489.9370313-347-211970539520969/AnsiballZ_podman_image.py'
Nov 22 08:01:30 compute-0 sudo[51778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:01:30 compute-0 python3.9[51780]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 22 08:01:34 compute-0 podman[51792]: 2025-11-22 08:01:34.415309917 +0000 UTC m=+3.945399548 image pull 02e0056780c6b31017996766cd13000137ba644dac3fc851da034db8cf4ceb2c quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Nov 22 08:01:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:01:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:01:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:01:34 compute-0 sudo[51778]: pam_unix(sudo:session): session closed for user root
Nov 22 08:01:35 compute-0 sudo[52047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwrbmdswwirrgmggeiuxrvtapjkyhkbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798494.757256-347-138927194994453/AnsiballZ_podman_image.py'
Nov 22 08:01:35 compute-0 sudo[52047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:01:35 compute-0 python3.9[52049]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 22 08:01:40 compute-0 podman[52062]: 2025-11-22 08:01:40.590107772 +0000 UTC m=+5.300081308 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Nov 22 08:01:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:01:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:01:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:01:40 compute-0 sudo[52047]: pam_unix(sudo:session): session closed for user root
Nov 22 08:01:41 compute-0 sshd-session[45149]: Connection closed by 192.168.122.30 port 42750
Nov 22 08:01:41 compute-0 sshd-session[45146]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:01:41 compute-0 systemd-logind[826]: Session 11 logged out. Waiting for processes to exit.
Nov 22 08:01:41 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Nov 22 08:01:41 compute-0 systemd[1]: session-11.scope: Consumed 2min 21.530s CPU time.
Nov 22 08:01:41 compute-0 systemd-logind[826]: Removed session 11.
Nov 22 08:01:47 compute-0 sshd-session[52310]: Accepted publickey for zuul from 192.168.122.30 port 40708 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 08:01:47 compute-0 systemd-logind[826]: New session 12 of user zuul.
Nov 22 08:01:47 compute-0 systemd[1]: Started Session 12 of User zuul.
Nov 22 08:01:47 compute-0 sshd-session[52310]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 08:01:48 compute-0 python3.9[52474]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:01:49 compute-0 sudo[52628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nivkadqqqaxqspwpilgbdqorjurcdttt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798509.3484447-36-280520133961206/AnsiballZ_getent.py'
Nov 22 08:01:49 compute-0 sudo[52628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:01:49 compute-0 python3.9[52630]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 22 08:01:50 compute-0 sudo[52628]: pam_unix(sudo:session): session closed for user root
Nov 22 08:01:50 compute-0 sudo[52781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqwsrjyxzqrohwiblyuyllepjqqtjikn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798510.1855843-44-158191510240587/AnsiballZ_group.py'
Nov 22 08:01:50 compute-0 sudo[52781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:01:50 compute-0 python3.9[52783]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 08:01:50 compute-0 groupadd[52784]: group added to /etc/group: name=openvswitch, GID=42476
Nov 22 08:01:50 compute-0 groupadd[52784]: group added to /etc/gshadow: name=openvswitch
Nov 22 08:01:50 compute-0 groupadd[52784]: new group: name=openvswitch, GID=42476
Nov 22 08:01:50 compute-0 sudo[52781]: pam_unix(sudo:session): session closed for user root
Nov 22 08:01:51 compute-0 sudo[52939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxfmwaopxnpknhhgbgehlvxtkypxjegh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798510.9917054-52-49798404883802/AnsiballZ_user.py'
Nov 22 08:01:51 compute-0 sudo[52939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:01:51 compute-0 python3.9[52941]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 08:01:51 compute-0 useradd[52943]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 22 08:01:51 compute-0 useradd[52943]: add 'openvswitch' to group 'hugetlbfs'
Nov 22 08:01:51 compute-0 useradd[52943]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 22 08:01:51 compute-0 sudo[52939]: pam_unix(sudo:session): session closed for user root
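
The getent/group/user sequence is the usual idempotent account setup: look the account up first, then create the group and user with pinned IDs (42476 here) so ownership of OVS files is stable across nodes; the useradd lines confirm the supplementary hugetlbfs membership. Equivalent shell:

    # Create the openvswitch account with fixed IDs and no login shell
    groupadd -g 42476 openvswitch
    useradd -u 42476 -g openvswitch -G hugetlbfs -s /sbin/nologin \
        -c "openvswitch user" openvswitch
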
Nov 22 08:01:52 compute-0 sudo[53099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvttxusrnsfqcpeimzcolpwbgabmsomd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798511.953581-62-202071166618250/AnsiballZ_setup.py'
Nov 22 08:01:52 compute-0 sudo[53099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:01:52 compute-0 python3.9[53101]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 08:01:52 compute-0 sudo[53099]: pam_unix(sudo:session): session closed for user root
Nov 22 08:01:53 compute-0 sudo[53183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcvthhabdmejrolksddickyopsoqwncm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798511.953581-62-202071166618250/AnsiballZ_dnf.py'
Nov 22 08:01:53 compute-0 sudo[53183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:01:53 compute-0 python3.9[53185]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 08:01:54 compute-0 sudo[53183]: pam_unix(sudo:session): session closed for user root
Nov 22 08:01:55 compute-0 sudo[53345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjimqenpcklpfrnrxcvbebydetjziiav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798515.098312-76-30254299679574/AnsiballZ_dnf.py'
Nov 22 08:01:55 compute-0 sudo[53345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:01:55 compute-0 python3.9[53347]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 08:02:08 compute-0 kernel: SELinux:  Converting 2731 SID table entries...
Nov 22 08:02:08 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 08:02:08 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 22 08:02:08 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 08:02:08 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 22 08:02:08 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 08:02:08 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 08:02:08 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 08:02:08 compute-0 groupadd[53372]: group added to /etc/group: name=unbound, GID=993
Nov 22 08:02:08 compute-0 groupadd[53372]: group added to /etc/gshadow: name=unbound
Nov 22 08:02:08 compute-0 groupadd[53372]: new group: name=unbound, GID=993
Nov 22 08:02:08 compute-0 useradd[53379]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 22 08:02:08 compute-0 dbus-broker-launch[817]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 22 08:02:08 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 22 08:02:09 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 08:02:09 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 08:02:09 compute-0 systemd[1]: Reloading.
Nov 22 08:02:09 compute-0 systemd-sysv-generator[53880]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:02:09 compute-0 systemd-rc-local-generator[53876]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:02:09 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 08:02:10 compute-0 sudo[53345]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:10 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 08:02:10 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 08:02:10 compute-0 systemd[1]: run-r65b8558c718b4d7cbba9497ab7307c7c.service: Deactivated successfully.
Nov 22 08:02:11 compute-0 sudo[54446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrorvszbnzcmhwqdsnchwzgqtnrffvpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798530.5738597-84-212325900799769/AnsiballZ_systemd.py'
Nov 22 08:02:11 compute-0 sudo[54446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:11 compute-0 python3.9[54448]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 08:02:11 compute-0 systemd[1]: Reloading.
Nov 22 08:02:11 compute-0 systemd-rc-local-generator[54479]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:02:11 compute-0 systemd-sysv-generator[54483]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:02:11 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Nov 22 08:02:11 compute-0 chown[54490]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 22 08:02:11 compute-0 ovs-ctl[54495]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 22 08:02:11 compute-0 ovs-ctl[54495]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 22 08:02:11 compute-0 ovs-ctl[54495]: Starting ovsdb-server [  OK  ]
Nov 22 08:02:11 compute-0 ovs-vsctl[54544]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 22 08:02:12 compute-0 ovs-vsctl[54564]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"e5f17f07-bc92-4131-bf96-5df2839ca4b0\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 22 08:02:12 compute-0 ovs-ctl[54495]: Configuring Open vSwitch system IDs [  OK  ]
Nov 22 08:02:12 compute-0 ovs-ctl[54495]: Enabling remote OVSDB managers [  OK  ]
Nov 22 08:02:12 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Nov 22 08:02:12 compute-0 ovs-vsctl[54570]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 22 08:02:12 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 22 08:02:12 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 22 08:02:12 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 22 08:02:12 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Nov 22 08:02:12 compute-0 ovs-ctl[54615]: Inserting openvswitch module [  OK  ]
Nov 22 08:02:12 compute-0 ovs-ctl[54584]: Starting ovs-vswitchd [  OK  ]
Nov 22 08:02:12 compute-0 ovs-ctl[54584]: Enabling remote OVSDB managers [  OK  ]
Nov 22 08:02:12 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 22 08:02:12 compute-0 ovs-vsctl[54633]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 22 08:02:12 compute-0 systemd[1]: Starting Open vSwitch...
Nov 22 08:02:12 compute-0 systemd[1]: Finished Open vSwitch.
Nov 22 08:02:12 compute-0 sudo[54446]: pam_unix(sudo:session): session closed for user root
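[editor's note] First boot of the database unit: ovs-ctl found no /etc/openvswitch/conf.db, created an empty one, started ovsdb-server, then seeded the Open_vSwitch table with db-version, ovs-version and a generated system-id. The same bootstrap done by hand would look roughly like this; the schema path is the usual packaged location and is an assumption here:

    # Create an empty OVSDB from the packaged schema, then initialize the
    # Open_vSwitch root row without waiting for ovs-vswitchd:
    ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema
    ovs-vsctl --no-wait init
    # Once both units are up, verify the daemons and the seeded IDs:
    ovs-vsctl show
    ovs-vsctl get Open_vSwitch . external-ids:system-id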
Nov 22 08:02:13 compute-0 python3.9[54784]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:02:13 compute-0 sudo[54934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odnaswtplzvesugfgezhwrfxyrnefgcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798533.3209193-102-240454060996659/AnsiballZ_sefcontext.py'
Nov 22 08:02:13 compute-0 sudo[54934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:14 compute-0 python3.9[54936]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 22 08:02:15 compute-0 kernel: SELinux:  Converting 2745 SID table entries...
Nov 22 08:02:15 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 08:02:15 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 22 08:02:15 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 08:02:15 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 22 08:02:15 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 08:02:15 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 08:02:15 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 08:02:15 compute-0 sudo[54934]: pam_unix(sudo:session): session closed for user root
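[editor's note] The community.general.sefcontext task above adds a persistent file-context mapping for /var/lib/edpm-config(/.*)? and, with reload=True, reloads the policy, which is what the kernel "Converting 2745 SID table entries" block records. A sketch of the same change with the standard SELinux tools:

    # Map the path (and everything under it) to container_file_t:
    semanage fcontext -a -t container_file_t '/var/lib/edpm-config(/.*)?'
    # Relabel anything already on disk to match the new mapping:
    restorecon -Rv /var/lib/edpm-config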
Nov 22 08:02:16 compute-0 python3.9[55091]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:02:17 compute-0 sudo[55247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnyuvjbgijbscwvvipddgfcjrrsxrmjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798536.9525025-120-66531556746394/AnsiballZ_dnf.py'
Nov 22 08:02:17 compute-0 dbus-broker-launch[817]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 22 08:02:17 compute-0 sudo[55247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:17 compute-0 python3.9[55249]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 08:02:18 compute-0 sudo[55247]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:19 compute-0 sudo[55400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kivaxeplovwkrelzasxhpccajcgkodpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798539.227019-128-198422919057821/AnsiballZ_command.py'
Nov 22 08:02:19 compute-0 sudo[55400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:19 compute-0 python3.9[55402]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:02:20 compute-0 sudo[55400]: pam_unix(sudo:session): session closed for user root
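[editor's note] The dnf task above installs the base package set and the follow-up command task verifies it with rpm. A manual sketch of both steps; rpm -V prints nothing for a package whose files all verify, so empty output means the install is intact:

    dnf install -y driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
        sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts \
        grubby sos
    rpm -V driverctl lvm2 crudini jq nftables NetworkManager   # ...and the rest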
Nov 22 08:02:21 compute-0 sudo[55687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cymfeyhunedjokljevmvspoyjshkuzrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798540.9758997-136-46781583859618/AnsiballZ_file.py'
Nov 22 08:02:21 compute-0 sudo[55687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:21 compute-0 python3.9[55689]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 22 08:02:21 compute-0 sudo[55687]: pam_unix(sudo:session): session closed for user root
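[editor's note] The ansible.builtin.file task above then creates the directory itself with mode 0750; a short manual equivalent that relies on the fcontext mapping added earlier:

    install -d -m 0750 /var/lib/edpm-config
    restorecon /var/lib/edpm-config    # picks up the container_file_t mapping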
Nov 22 08:02:22 compute-0 python3.9[55839]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:02:22 compute-0 sudo[55991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvcysnwwcnmjihrweybnukyyasnehgsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798542.6639132-152-37679456083334/AnsiballZ_dnf.py'
Nov 22 08:02:22 compute-0 sudo[55991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:23 compute-0 python3.9[55993]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 08:02:24 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 08:02:24 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 08:02:24 compute-0 systemd[1]: Reloading.
Nov 22 08:02:24 compute-0 systemd-sysv-generator[56036]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:02:24 compute-0 systemd-rc-local-generator[56033]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:02:25 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 08:02:25 compute-0 sudo[55991]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:26 compute-0 sudo[56306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egjaajbkjujdsejxfcfecvkofwtxeson ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798545.7073307-160-269847720549968/AnsiballZ_systemd.py'
Nov 22 08:02:26 compute-0 sudo[56306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:26 compute-0 python3.9[56308]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:02:26 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 22 08:02:26 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Nov 22 08:02:26 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Nov 22 08:02:26 compute-0 systemd[1]: Stopping Network Manager...
Nov 22 08:02:26 compute-0 NetworkManager[7200]: <info>  [1763798546.4061] caught SIGTERM, shutting down normally.
Nov 22 08:02:26 compute-0 NetworkManager[7200]: <info>  [1763798546.4073] dhcp4 (eth0): canceled DHCP transaction
Nov 22 08:02:26 compute-0 NetworkManager[7200]: <info>  [1763798546.4073] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 08:02:26 compute-0 NetworkManager[7200]: <info>  [1763798546.4073] dhcp4 (eth0): state changed no lease
Nov 22 08:02:26 compute-0 NetworkManager[7200]: <info>  [1763798546.4075] manager: NetworkManager state is now CONNECTED_SITE
Nov 22 08:02:26 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 08:02:26 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 08:02:26 compute-0 NetworkManager[7200]: <info>  [1763798546.8092] exiting (success)
Nov 22 08:02:26 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 22 08:02:26 compute-0 systemd[1]: Stopped Network Manager.
Nov 22 08:02:26 compute-0 systemd[1]: NetworkManager.service: Consumed 24.332s CPU time, 4.1M memory peak, read 0B from disk, written 15.0K to disk.
Nov 22 08:02:26 compute-0 systemd[1]: Starting Network Manager...
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.8657] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:a7489e2e-a622-4254-9a7e-02eae9fa3dfd)
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.8658] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.8714] manager[0x563de9a1b090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 22 08:02:26 compute-0 systemd[1]: Starting Hostname Service...
Nov 22 08:02:26 compute-0 systemd[1]: Started Hostname Service.
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9495] hostname: hostname: using hostnamed
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9495] hostname: static hostname changed from (none) to "compute-0"
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9499] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9503] manager[0x563de9a1b090]: rfkill: Wi-Fi hardware radio set enabled
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9504] manager[0x563de9a1b090]: rfkill: WWAN hardware radio set enabled
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9522] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9529] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9530] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9530] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9531] manager: Networking is enabled by state file
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9533] settings: Loaded settings plugin: keyfile (internal)
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9536] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9555] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9561] dhcp: init: Using DHCP client 'internal'
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9563] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9567] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9571] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9576] device (lo): Activation: starting connection 'lo' (d01cbcdc-cc87-4c04-b365-895d2218de25)
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9581] device (eth0): carrier: link connected
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9584] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9587] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9588] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9591] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9596] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9600] device (eth1): carrier: link connected
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9603] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9607] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (ba681640-7f4a-58d5-a224-a4a4f9cf13bc) (indicated)
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9607] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9611] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9615] device (eth1): Activation: starting connection 'ci-private-network' (ba681640-7f4a-58d5-a224-a4a4f9cf13bc)
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9620] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 22 08:02:26 compute-0 systemd[1]: Started Network Manager.
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9629] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9631] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9633] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9635] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9637] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9639] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9641] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9644] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9648] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9650] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9668] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9682] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9693] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9695] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9699] device (lo): Activation: successful, device activated.
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9703] dhcp4 (eth0): state changed new lease, address=38.129.56.85
Nov 22 08:02:26 compute-0 NetworkManager[56326]: <info>  [1763798546.9709] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 22 08:02:26 compute-0 systemd[1]: Starting Network Manager Wait Online...
Nov 22 08:02:27 compute-0 sudo[56306]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:27 compute-0 NetworkManager[56326]: <info>  [1763798547.0509] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 22 08:02:27 compute-0 NetworkManager[56326]: <info>  [1763798547.0525] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 22 08:02:27 compute-0 NetworkManager[56326]: <info>  [1763798547.0529] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 22 08:02:27 compute-0 NetworkManager[56326]: <info>  [1763798547.0532] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 22 08:02:27 compute-0 NetworkManager[56326]: <info>  [1763798547.0535] device (eth1): Activation: successful, device activated.
Nov 22 08:02:27 compute-0 NetworkManager[56326]: <info>  [1763798547.0719] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 22 08:02:27 compute-0 NetworkManager[56326]: <info>  [1763798547.0721] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 22 08:02:27 compute-0 NetworkManager[56326]: <info>  [1763798547.0725] manager: NetworkManager state is now CONNECTED_SITE
Nov 22 08:02:27 compute-0 NetworkManager[56326]: <info>  [1763798547.0729] device (eth0): Activation: successful, device activated.
Nov 22 08:02:27 compute-0 NetworkManager[56326]: <info>  [1763798547.0734] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 22 08:02:27 compute-0 NetworkManager[56326]: <info>  [1763798547.0736] manager: startup complete
Nov 22 08:02:27 compute-0 systemd[1]: Finished Network Manager Wait Online.
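[editor's note] NetworkManager-wait-online returns once startup completes, i.e. once the manager reaches "startup complete" above. The same condition can be checked by hand with nm-online, which is the tool the unit itself wraps:

    # Exit 0 as soon as NetworkManager startup has finished, waiting up to 60s:
    nm-online -s -q --timeout=60
    nmcli general status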
Nov 22 08:02:27 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 08:02:27 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 08:02:27 compute-0 systemd[1]: run-r372ead200dc34e678c2f8fd0ce83b2fc.service: Deactivated successfully.
Nov 22 08:02:27 compute-0 sudo[56533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptixohwyiisjznuddeppteggigcqbyjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798547.1691122-168-126064899597826/AnsiballZ_dnf.py'
Nov 22 08:02:27 compute-0 sudo[56533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:27 compute-0 python3.9[56535]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 08:02:33 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 08:02:33 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 08:02:33 compute-0 systemd[1]: Reloading.
Nov 22 08:02:33 compute-0 systemd-sysv-generator[56589]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:02:33 compute-0 systemd-rc-local-generator[56584]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:02:33 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 08:02:37 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 08:02:37 compute-0 sudo[56533]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:37 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 08:02:37 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 08:02:37 compute-0 systemd[1]: run-r849c6f9641774692889fd3bdd88df85e.service: Deactivated successfully.
Nov 22 08:02:38 compute-0 sudo[56992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbzxoirupddagffiebalogqgzypmovcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798557.8080528-180-30009221254107/AnsiballZ_stat.py'
Nov 22 08:02:38 compute-0 sudo[56992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:38 compute-0 python3.9[56994]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:02:38 compute-0 sudo[56992]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:38 compute-0 sudo[57144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjdihdwezlcnnmmaeubigvqqaiqymibt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798558.4521072-189-250013236812394/AnsiballZ_ini_file.py'
Nov 22 08:02:38 compute-0 sudo[57144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:39 compute-0 python3.9[57146]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:02:39 compute-0 sudo[57144]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:39 compute-0 sudo[57298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpepccwidkujzaiokclkchhvrqueihkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798559.2825058-199-145761762328709/AnsiballZ_ini_file.py'
Nov 22 08:02:39 compute-0 sudo[57298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:39 compute-0 python3.9[57300]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:02:39 compute-0 sudo[57298]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:40 compute-0 sudo[57450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-treoleaqodshuvsxqgrvrgeobolmttie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798559.9523315-199-183918418855455/AnsiballZ_ini_file.py'
Nov 22 08:02:40 compute-0 sudo[57450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:40 compute-0 python3.9[57452]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:02:40 compute-0 sudo[57450]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:40 compute-0 sudo[57602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olbrplvzpwvcdidzzyubotnvgzyftagw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798560.5776722-214-158236565007350/AnsiballZ_ini_file.py'
Nov 22 08:02:40 compute-0 sudo[57602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:41 compute-0 python3.9[57604]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:02:41 compute-0 sudo[57602]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:41 compute-0 sudo[57754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecmqsdmqojclgnthnriogzeqvawgjwkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798561.1590154-214-137609361954919/AnsiballZ_ini_file.py'
Nov 22 08:02:41 compute-0 sudo[57754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:41 compute-0 python3.9[57756]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:02:41 compute-0 sudo[57754]: pam_unix(sudo:session): session closed for user root
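[editor's note] Net effect of the five ini_file tasks above: ensure no-auto-default=* under [main] in NetworkManager.conf, and drop any dns=none and rc-manager=unmanaged keys both there and in /etc/NetworkManager/conf.d/99-cloud-init.conf, so NetworkManager stops auto-creating wired profiles but is again allowed to manage DNS and resolv.conf. The resulting [main] stanza, as a sketch (any other pre-existing keys are left untouched):

    [main]
    no-auto-default=*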
Nov 22 08:02:41 compute-0 sudo[57906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxbtajsqepmntpzoqrjjgxrtqkzugqss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798561.708441-229-98179616651226/AnsiballZ_stat.py'
Nov 22 08:02:41 compute-0 sudo[57906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:42 compute-0 python3.9[57908]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:02:42 compute-0 sudo[57906]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:42 compute-0 sudo[58029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlaqshshmnqemsegwnzdvtdwjjbkkyet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798561.708441-229-98179616651226/AnsiballZ_copy.py'
Nov 22 08:02:42 compute-0 sudo[58029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:42 compute-0 python3.9[58031]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763798561.708441-229-98179616651226/.source _original_basename=._wjvvsom follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:02:42 compute-0 sudo[58029]: pam_unix(sudo:session): session closed for user root
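[editor's note] The copied /etc/dhcp/dhclient-enter-hooks content is not in the log (only its checksum is). For illustration only: hooks of this kind are sourced by dhclient-script, and the common pattern is to override make_resolv_conf() so dhclient leaves /etc/resolv.conf alone. Everything below is a hypothetical sketch, not the deployed file:

    #!/bin/sh
    # HYPOTHETICAL sketch; the real file's contents are not in this log.
    # dhclient-script sources this hook, so an empty make_resolv_conf()
    # keeps dhclient from rewriting /etc/resolv.conf.
    make_resolv_conf() { : ; }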
Nov 22 08:02:43 compute-0 sudo[58181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufrmvpzhgzfgeffqnepsawksarpucqcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798562.9525948-244-115565892903713/AnsiballZ_file.py'
Nov 22 08:02:43 compute-0 sudo[58181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:43 compute-0 python3.9[58183]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:02:43 compute-0 sudo[58181]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:44 compute-0 sudo[58333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mthnblemmczpusgusddwusxzhekrsjdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798563.5387986-252-103064136255454/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 22 08:02:44 compute-0 sudo[58333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:44 compute-0 python3.9[58335]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 22 08:02:44 compute-0 sudo[58333]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:44 compute-0 sudo[58485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfvudbtqnipbdkqshyamjmimzsjknpgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798564.4944422-261-57721304111955/AnsiballZ_file.py'
Nov 22 08:02:44 compute-0 sudo[58485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:44 compute-0 python3.9[58487]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:02:44 compute-0 sudo[58485]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:45 compute-0 sudo[58637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbziitywnsvzkvbkdiywdnechggzbmwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798565.2480464-271-203767862868243/AnsiballZ_stat.py'
Nov 22 08:02:45 compute-0 sudo[58637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:45 compute-0 sudo[58637]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:46 compute-0 sudo[58760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzyxatezbenvdverjxnkdxcrozqzqfji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798565.2480464-271-203767862868243/AnsiballZ_copy.py'
Nov 22 08:02:46 compute-0 sudo[58760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:46 compute-0 sudo[58760]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:46 compute-0 sudo[58912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nprwxjcbzomvfqkzlnmmhorzfpgiwdbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798566.3781085-286-10667965855337/AnsiballZ_slurp.py'
Nov 22 08:02:46 compute-0 sudo[58912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:46 compute-0 python3.9[58914]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 22 08:02:47 compute-0 sudo[58912]: pam_unix(sudo:session): session closed for user root
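[editor's note] The slurped /etc/os-net-config/config.yaml is not reproduced in the log, but the NetworkManager activity further down (a br-ex OVS bridge with an eth1 port plus vlan20/21/22 interfaces) constrains its shape. One plausible layout in the os-net-config schema; the VLAN placement, addressing, and all other details are assumptions:

    network_config:
      - type: ovs_bridge
        name: br-ex
        use_dhcp: false        # addressing is an assumption
        members:
          - type: interface
            name: eth1
          - type: vlan
            vlan_id: 20
          - type: vlan
            vlan_id: 21
          - type: vlan
            vlan_id: 22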
Nov 22 08:02:48 compute-0 sudo[59087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaeatxaqweiunoupbmmxdxikgakizpjc ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798567.2414553-295-133960779636193/async_wrapper.py j985781813784 300 /home/zuul/.ansible/tmp/ansible-tmp-1763798567.2414553-295-133960779636193/AnsiballZ_edpm_os_net_config.py _'
Nov 22 08:02:48 compute-0 sudo[59087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:48 compute-0 ansible-async_wrapper.py[59089]: Invoked with j985781813784 300 /home/zuul/.ansible/tmp/ansible-tmp-1763798567.2414553-295-133960779636193/AnsiballZ_edpm_os_net_config.py _
Nov 22 08:02:48 compute-0 ansible-async_wrapper.py[59092]: Starting module and watcher
Nov 22 08:02:48 compute-0 ansible-async_wrapper.py[59092]: Start watching 59093 (300)
Nov 22 08:02:48 compute-0 ansible-async_wrapper.py[59093]: Start module (59093)
Nov 22 08:02:48 compute-0 ansible-async_wrapper.py[59089]: Return async_wrapper task started.
Nov 22 08:02:48 compute-0 sudo[59087]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:48 compute-0 python3.9[59094]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
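[editor's note] The edpm_os_net_config module shown above drives os-net-config with debug, cleanup and detailed exit codes enabled (use_nmstate=True selects the nmstate-backed provider inside the module rather than a CLI flag). Roughly the equivalent invocation by hand; with --detailed-exit-codes, rc 0 means no changes were needed and rc 2 signals that configuration was applied:

    os-net-config -c /etc/os-net-config/config.yaml \
        --debug --detailed-exit-codes --cleanup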
Nov 22 08:02:49 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 22 08:02:49 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 22 08:02:49 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 22 08:02:49 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 22 08:02:49 compute-0 kernel: cfg80211: failed to load regulatory.db
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.1778] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59095 uid=0 result="success"
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.1796] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59095 uid=0 result="success"
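[editor's note] The two audit lines above show os-net-config wrapping the reconfiguration in a NetworkManager checkpoint, so every change rolls back automatically if the rollback timeout expires before the new network state is confirmed. The same calls can be issued by hand over D-Bus; a sketch, with an empty device array meaning "checkpoint all devices", flags 0, and example timeouts:

    # CheckpointCreate(devices: ao, rollback_timeout: u, flags: u) -> o
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointCreate aouu 0 300 0
    # Extend the rollback window of the checkpoint returned above:
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointAdjustRollbackTimeout ou \
        /org/freedesktop/NetworkManager/Checkpoint/1 300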
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2346] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2347] audit: op="connection-add" uuid="bccd1112-c5bc-41b5-aa57-4feb5173c7fe" name="br-ex-br" pid=59095 uid=0 result="success"
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2365] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2367] audit: op="connection-add" uuid="06f9e33a-2356-48bd-b939-97927df87978" name="br-ex-port" pid=59095 uid=0 result="success"
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2379] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2381] audit: op="connection-add" uuid="87833549-96cf-4129-afa9-95dbc3d78587" name="eth1-port" pid=59095 uid=0 result="success"
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2393] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2394] audit: op="connection-add" uuid="cdf44555-0744-4f96-a65f-2851fc80922c" name="vlan20-port" pid=59095 uid=0 result="success"
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2406] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2407] audit: op="connection-add" uuid="80637fe3-0271-4565-9815-f6675dbb4dee" name="vlan21-port" pid=59095 uid=0 result="success"
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2418] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2420] audit: op="connection-add" uuid="068eda41-42ae-4e0c-8214-b237202d23bb" name="vlan22-port" pid=59095 uid=0 result="success"
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2439] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp" pid=59095 uid=0 result="success"
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2456] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2458] audit: op="connection-add" uuid="8619714b-185f-4fbf-8961-847a314e363f" name="br-ex-if" pid=59095 uid=0 result="success"
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2512] audit: op="connection-update" uuid="ba681640-7f4a-58d5-a224-a4a4f9cf13bc" name="ci-private-network" args="ovs-external-ids.data,ipv4.never-default,ipv4.dns,ipv4.routing-rules,ipv4.routes,ipv4.addresses,ipv4.method,ipv6.routes,ipv6.dns,ipv6.routing-rules,ipv6.method,ipv6.addr-gen-mode,ipv6.addresses,ovs-interface.type,connection.slave-type,connection.controller,connection.master,connection.timestamp,connection.port-type" pid=59095 uid=0 result="success"
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2531] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2533] audit: op="connection-add" uuid="8f9f612e-2c45-46a6-b16d-38a647dbdddc" name="vlan20-if" pid=59095 uid=0 result="success"
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2549] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2550] audit: op="connection-add" uuid="641dd202-7150-4b22-8335-c925416a4083" name="vlan21-if" pid=59095 uid=0 result="success"
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2568] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2569] audit: op="connection-add" uuid="13cbe9af-7fc4-4f33-8b5d-c77509c1ca26" name="vlan22-if" pid=59095 uid=0 result="success"
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2583] audit: op="connection-delete" uuid="e2c878f8-511d-38c1-9152-001095563e31" name="Wired connection 1" pid=59095 uid=0 result="success"
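[editor's note] The connection-add burst above builds NetworkManager's three-layer Open vSwitch model: one ovs-bridge profile, one ovs-port profile per port, and an ovs-interface (or plain ethernet) profile enslaved to each port. Recreating just the br-ex/eth1 slice by hand would look roughly like this; connection names match the log, addressing is omitted:

    nmcli conn add type ovs-bridge con-name br-ex-br conn.interface br-ex
    nmcli conn add type ovs-port con-name br-ex-port conn.interface br-ex master br-ex-br
    nmcli conn add type ovs-interface slave-type ovs-port con-name br-ex-if \
        conn.interface br-ex master br-ex-port ipv4.method disabled ipv6.method disabled
    nmcli conn add type ovs-port con-name eth1-port conn.interface eth1 master br-ex-br
    nmcli conn add type ethernet con-name eth1 conn.interface eth1 master eth1-port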
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2594] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2603] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2607] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (bccd1112-c5bc-41b5-aa57-4feb5173c7fe)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2607] audit: op="connection-activate" uuid="bccd1112-c5bc-41b5-aa57-4feb5173c7fe" name="br-ex-br" pid=59095 uid=0 result="success"
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2609] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2615] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2619] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (06f9e33a-2356-48bd-b939-97927df87978)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2620] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2626] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2629] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (87833549-96cf-4129-afa9-95dbc3d78587)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2631] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2637] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2641] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (cdf44555-0744-4f96-a65f-2851fc80922c)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2642] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2648] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2652] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (80637fe3-0271-4565-9815-f6675dbb4dee)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2653] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2659] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2663] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (068eda41-42ae-4e0c-8214-b237202d23bb)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2663] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2665] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2667] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2671] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2674] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2676] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (8619714b-185f-4fbf-8961-847a314e363f)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2677] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2680] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2681] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2681] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2682] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2690] device (eth1): disconnecting for new activation request.
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2690] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2692] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2693] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2694] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2696] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2699] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2702] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (8f9f612e-2c45-46a6-b16d-38a647dbdddc)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2702] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2704] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2705] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2706] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2707] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2710] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2713] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (641dd202-7150-4b22-8335-c925416a4083)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2714] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2716] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2717] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2718] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2719] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2724] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2727] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (13cbe9af-7fc4-4f33-8b5d-c77509c1ca26)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2727] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2730] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2731] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2731] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2732] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2742] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority" pid=59095 uid=0 result="success"
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2743] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2745] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2746] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2751] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2754] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2757] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2758] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2759] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2763] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2767] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2769] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2770] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2773] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 kernel: ovs-system: entered promiscuous mode
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2777] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2781] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2783] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2787] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2791] dhcp4 (eth0): canceled DHCP transaction
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2791] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2792] dhcp4 (eth0): state changed no lease
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2793] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 22 08:02:50 compute-0 kernel: Timeout policy base is empty
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2806] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2812] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59095 uid=0 result="fail" reason="Device is not activated"
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2818] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 22 08:02:50 compute-0 systemd-udevd[59100]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:02:50 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2867] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2873] dhcp4 (eth0): state changed new lease, address=38.129.56.85
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2878] device (eth1): disconnecting for new activation request.
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2880] audit: op="connection-activate" uuid="ba681640-7f4a-58d5-a224-a4a4f9cf13bc" name="ci-private-network" pid=59095 uid=0 result="success"
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2931] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.2954] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59095 uid=0 result="success"
Nov 22 08:02:50 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3016] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3111] device (eth1): Activation: starting connection 'ci-private-network' (ba681640-7f4a-58d5-a224-a4a4f9cf13bc)
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3115] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3123] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3126] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3134] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3138] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3142] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3143] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3145] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3145] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3146] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3173] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3179] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3183] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3187] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3191] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3194] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3198] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3201] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3205] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3209] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3213] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3218] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3224] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 kernel: br-ex: entered promiscuous mode
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3277] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3283] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3288] device (eth1): Activation: successful, device activated.
Nov 22 08:02:50 compute-0 kernel: vlan22: entered promiscuous mode
Nov 22 08:02:50 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 22 08:02:50 compute-0 systemd-udevd[59099]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:02:50 compute-0 kernel: vlan20: entered promiscuous mode
Nov 22 08:02:50 compute-0 systemd-udevd[59188]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3430] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3441] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3455] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 kernel: vlan21: entered promiscuous mode
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3493] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3504] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3508] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3512] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3519] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3520] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3525] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3576] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3589] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3601] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3611] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3618] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3620] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3625] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3632] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3633] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 08:02:50 compute-0 NetworkManager[56326]: <info>  [1763798570.3638] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 08:02:51 compute-0 NetworkManager[56326]: <info>  [1763798571.4618] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59095 uid=0 result="success"
Nov 22 08:02:51 compute-0 NetworkManager[56326]: <info>  [1763798571.6445] checkpoint[0x563de99f2950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 22 08:02:51 compute-0 NetworkManager[56326]: <info>  [1763798571.6447] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59095 uid=0 result="success"
Nov 22 08:02:51 compute-0 sudo[59428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upahyajolowndolbtbxipbmmmtsbwnom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798571.362428-295-228357017204963/AnsiballZ_async_status.py'
Nov 22 08:02:51 compute-0 sudo[59428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:51 compute-0 NetworkManager[56326]: <info>  [1763798571.8912] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59095 uid=0 result="success"
Nov 22 08:02:51 compute-0 NetworkManager[56326]: <info>  [1763798571.8924] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59095 uid=0 result="success"
Nov 22 08:02:52 compute-0 python3.9[59430]: ansible-ansible.legacy.async_status Invoked with jid=j985781813784.59089 mode=status _async_dir=/root/.ansible_async
Nov 22 08:02:52 compute-0 sudo[59428]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:52 compute-0 NetworkManager[56326]: <info>  [1763798572.0611] audit: op="networking-control" arg="global-dns-configuration" pid=59095 uid=0 result="success"
Nov 22 08:02:52 compute-0 NetworkManager[56326]: <info>  [1763798572.0639] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 22 08:02:52 compute-0 NetworkManager[56326]: <info>  [1763798572.0662] audit: op="networking-control" arg="global-dns-configuration" pid=59095 uid=0 result="success"
Nov 22 08:02:52 compute-0 NetworkManager[56326]: <info>  [1763798572.0684] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59095 uid=0 result="success"
Nov 22 08:02:52 compute-0 NetworkManager[56326]: <info>  [1763798572.2123] checkpoint[0x563de99f2a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 22 08:02:52 compute-0 NetworkManager[56326]: <info>  [1763798572.2128] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59095 uid=0 result="success"
Nov 22 08:02:52 compute-0 ansible-async_wrapper.py[59093]: Module complete (59093)
Nov 22 08:02:53 compute-0 ansible-async_wrapper.py[59092]: Done in kid B.
Nov 22 08:02:55 compute-0 sudo[59532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imcfbltfggomczznwztuymawyynoacau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798571.362428-295-228357017204963/AnsiballZ_async_status.py'
Nov 22 08:02:55 compute-0 sudo[59532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:55 compute-0 python3.9[59534]: ansible-ansible.legacy.async_status Invoked with jid=j985781813784.59089 mode=status _async_dir=/root/.ansible_async
Nov 22 08:02:55 compute-0 sudo[59532]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:55 compute-0 sudo[59632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbrfzwawdgzbleftupumkaccrwftzmiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798571.362428-295-228357017204963/AnsiballZ_async_status.py'
Nov 22 08:02:55 compute-0 sudo[59632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:55 compute-0 python3.9[59634]: ansible-ansible.legacy.async_status Invoked with jid=j985781813784.59089 mode=cleanup _async_dir=/root/.ansible_async
Nov 22 08:02:55 compute-0 sudo[59632]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:56 compute-0 sudo[59784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjwaybbuvcspcgqyzspiiqpgafbozfhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798576.1070695-322-272539559199300/AnsiballZ_stat.py'
Nov 22 08:02:56 compute-0 sudo[59784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:56 compute-0 python3.9[59786]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:02:56 compute-0 sudo[59784]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:56 compute-0 sudo[59907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddvmuyzhuokykfnqnxgreayahnqtzznl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798576.1070695-322-272539559199300/AnsiballZ_copy.py'
Nov 22 08:02:56 compute-0 sudo[59907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:56 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 22 08:02:57 compute-0 python3.9[59909]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763798576.1070695-322-272539559199300/.source.returncode _original_basename=.3nf6h76m follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:02:57 compute-0 sudo[59907]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:57 compute-0 sudo[60061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hthciwmqckokxjbacbwvveygrzzuebom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798577.3048928-338-169500669248079/AnsiballZ_stat.py'
Nov 22 08:02:57 compute-0 sudo[60061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:57 compute-0 python3.9[60063]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:02:57 compute-0 sudo[60061]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:58 compute-0 sudo[60184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crfhdzbjhqosuyaqkoonyacrbqhtibay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798577.3048928-338-169500669248079/AnsiballZ_copy.py'
Nov 22 08:02:58 compute-0 sudo[60184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:58 compute-0 python3.9[60186]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763798577.3048928-338-169500669248079/.source.cfg _original_basename=.9q0o456t follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:02:58 compute-0 sudo[60184]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:58 compute-0 sudo[60337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzfcfojdhwhehvfdbstcfhethmeihasb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798578.41279-353-203960966797581/AnsiballZ_systemd.py'
Nov 22 08:02:58 compute-0 sudo[60337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:02:59 compute-0 python3.9[60339]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:02:59 compute-0 systemd[1]: Reloading Network Manager...
Nov 22 08:02:59 compute-0 NetworkManager[56326]: <info>  [1763798579.1126] audit: op="reload" arg="0" pid=60343 uid=0 result="success"
Nov 22 08:02:59 compute-0 NetworkManager[56326]: <info>  [1763798579.1132] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 22 08:02:59 compute-0 systemd[1]: Reloaded Network Manager.
Nov 22 08:02:59 compute-0 sudo[60337]: pam_unix(sudo:session): session closed for user root
Nov 22 08:02:59 compute-0 sshd-session[52313]: Connection closed by 192.168.122.30 port 40708
Nov 22 08:02:59 compute-0 sshd-session[52310]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:02:59 compute-0 systemd-logind[826]: Session 12 logged out. Waiting for processes to exit.
Nov 22 08:02:59 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Nov 22 08:02:59 compute-0 systemd[1]: session-12.scope: Consumed 48.480s CPU time.
Nov 22 08:02:59 compute-0 systemd-logind[826]: Removed session 12.
Nov 22 08:03:04 compute-0 sshd-session[60373]: Accepted publickey for zuul from 192.168.122.30 port 39258 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 08:03:04 compute-0 systemd-logind[826]: New session 13 of user zuul.
Nov 22 08:03:05 compute-0 systemd[1]: Started Session 13 of User zuul.
Nov 22 08:03:05 compute-0 sshd-session[60373]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 08:03:05 compute-0 python3.9[60527]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:03:06 compute-0 python3.9[60681]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 08:03:07 compute-0 python3.9[60870]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:03:08 compute-0 sshd-session[60376]: Connection closed by 192.168.122.30 port 39258
Nov 22 08:03:08 compute-0 sshd-session[60373]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:03:08 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Nov 22 08:03:08 compute-0 systemd[1]: session-13.scope: Consumed 2.174s CPU time.
Nov 22 08:03:08 compute-0 systemd-logind[826]: Session 13 logged out. Waiting for processes to exit.
Nov 22 08:03:08 compute-0 systemd-logind[826]: Removed session 13.
Nov 22 08:03:09 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 08:03:14 compute-0 sshd-session[60899]: Accepted publickey for zuul from 192.168.122.30 port 34668 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 08:03:14 compute-0 systemd-logind[826]: New session 14 of user zuul.
Nov 22 08:03:14 compute-0 systemd[1]: Started Session 14 of User zuul.
Nov 22 08:03:14 compute-0 sshd-session[60899]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 08:03:15 compute-0 python3.9[61053]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:03:15 compute-0 python3.9[61207]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:03:16 compute-0 sudo[61361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loqyittbhjkppuepuhenlfttsxgesidb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798596.3344707-40-179209914973721/AnsiballZ_setup.py'
Nov 22 08:03:16 compute-0 sudo[61361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:16 compute-0 python3.9[61363]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 08:03:17 compute-0 sudo[61361]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:17 compute-0 sudo[61445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwzsjtlqslymblngceouiiayiqixmbun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798596.3344707-40-179209914973721/AnsiballZ_dnf.py'
Nov 22 08:03:17 compute-0 sudo[61445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:17 compute-0 python3.9[61447]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 08:03:19 compute-0 sudo[61445]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:19 compute-0 sudo[61599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orzxfhlbslhzprnayyntejnlmwelbyus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798599.4128408-52-256336122426082/AnsiballZ_setup.py'
Nov 22 08:03:19 compute-0 sudo[61599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:20 compute-0 python3.9[61601]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 08:03:20 compute-0 sudo[61599]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:20 compute-0 sudo[61790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwgguufvnvbtfbfeviwkqwrsaoerbyns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798600.515202-63-258296831664063/AnsiballZ_file.py'
Nov 22 08:03:20 compute-0 sudo[61790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:21 compute-0 python3.9[61792]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:03:21 compute-0 sudo[61790]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:21 compute-0 sudo[61943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzjlalmwgizrfbrjyqzqwehiwjgpprat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798601.29089-71-146057636717414/AnsiballZ_command.py'
Nov 22 08:03:21 compute-0 sudo[61943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:21 compute-0 python3.9[61945]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:03:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:03:22 compute-0 sudo[61943]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:22 compute-0 sudo[62106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flwmtkxjgxmibhyyqvgeavdbwvxxbzja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798602.2592003-79-176401685484976/AnsiballZ_stat.py'
Nov 22 08:03:22 compute-0 sudo[62106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:23 compute-0 python3.9[62108]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:03:23 compute-0 sudo[62106]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:23 compute-0 sudo[62184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wamlffbmqdonosuzclzvxmdvvyvjfmxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798602.2592003-79-176401685484976/AnsiballZ_file.py'
Nov 22 08:03:23 compute-0 sudo[62184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:23 compute-0 python3.9[62186]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:03:23 compute-0 sudo[62184]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:23 compute-0 sudo[62336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seedpodllrkfreftldebkyexambomiuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798603.6673155-91-108394275018734/AnsiballZ_stat.py'
Nov 22 08:03:23 compute-0 sudo[62336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:24 compute-0 python3.9[62338]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:03:24 compute-0 sudo[62336]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:24 compute-0 sudo[62414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhpyrabhiizrcrmdfoatenurwlordzls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798603.6673155-91-108394275018734/AnsiballZ_file.py'
Nov 22 08:03:24 compute-0 sudo[62414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:24 compute-0 python3.9[62416]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:03:24 compute-0 sudo[62414]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:25 compute-0 sudo[62566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emqbaqfhdazzabfxfocqpfydsdyiedjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798604.9102433-104-136535063396403/AnsiballZ_ini_file.py'
Nov 22 08:03:25 compute-0 sudo[62566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:25 compute-0 python3.9[62568]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:03:25 compute-0 sudo[62566]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:26 compute-0 sudo[62718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcamkqjseitfasfpwgulhtqercthpnjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798605.7764304-104-245468776234170/AnsiballZ_ini_file.py'
Nov 22 08:03:26 compute-0 sudo[62718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:26 compute-0 python3.9[62720]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:03:26 compute-0 sudo[62718]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:26 compute-0 sudo[62870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecskealpnktqdrjgizhxczilfwqiplbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798606.3234124-104-163726685254961/AnsiballZ_ini_file.py'
Nov 22 08:03:26 compute-0 sudo[62870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:26 compute-0 python3.9[62872]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:03:26 compute-0 sudo[62870]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:27 compute-0 sudo[63022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewcqfnmfskqkjzwxgtwmieszzmemcybp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798606.9386728-104-232818632728648/AnsiballZ_ini_file.py'
Nov 22 08:03:27 compute-0 sudo[63022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:27 compute-0 python3.9[63024]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:03:27 compute-0 sudo[63022]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:27 compute-0 sudo[63174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfusmvownntqicooqfdrpuiwomtgjrmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798607.6883316-135-168220257323080/AnsiballZ_dnf.py'
Nov 22 08:03:27 compute-0 sudo[63174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:28 compute-0 python3.9[63176]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 08:03:29 compute-0 sudo[63174]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:30 compute-0 sudo[63327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxbrwwlbrivhzgkscuusrflmdhkxcixx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798610.0247939-146-157811491837342/AnsiballZ_setup.py'
Nov 22 08:03:30 compute-0 sudo[63327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:30 compute-0 python3.9[63329]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:03:30 compute-0 sudo[63327]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:31 compute-0 sudo[63481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woyajbbuxsqglbghorhhjcqmalvuxbmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798610.7695394-154-29575421986359/AnsiballZ_stat.py'
Nov 22 08:03:31 compute-0 sudo[63481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:31 compute-0 python3.9[63483]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:03:31 compute-0 sudo[63481]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:31 compute-0 sudo[63633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juitsnokvadyjqbfpetoexraapgzgyfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798611.3998957-163-206157344507954/AnsiballZ_stat.py'
Nov 22 08:03:31 compute-0 sudo[63633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:31 compute-0 python3.9[63635]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:03:31 compute-0 sudo[63633]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:32 compute-0 sudo[63785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yckvlqqdjzzqqtaftebhsdldrddolvvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798612.0906668-173-209011029522278/AnsiballZ_command.py'
Nov 22 08:03:32 compute-0 sudo[63785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:32 compute-0 python3.9[63787]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:03:32 compute-0 sudo[63785]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:33 compute-0 sudo[63938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvnscrzknyhbazgyotnjzebsoocrmdbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798613.0013797-183-200004054810960/AnsiballZ_service_facts.py'
Nov 22 08:03:33 compute-0 sudo[63938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:33 compute-0 python3.9[63940]: ansible-service_facts Invoked
Nov 22 08:03:33 compute-0 network[63957]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 08:03:33 compute-0 network[63958]: 'network-scripts' will be removed from distribution in near future.
Nov 22 08:03:33 compute-0 network[63959]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 08:03:36 compute-0 sudo[63938]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:37 compute-0 sudo[64243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ithgdxhjmvmqivkpkmqhwdpugtghjnpb ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1763798616.9842124-198-75182571417062/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1763798616.9842124-198-75182571417062/args'
Nov 22 08:03:37 compute-0 sudo[64243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:37 compute-0 sudo[64243]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:38 compute-0 sudo[64411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciruozwbricjedxexbtucsmtsanifweb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798617.8920388-209-218222335011201/AnsiballZ_dnf.py'
Nov 22 08:03:38 compute-0 sudo[64411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:38 compute-0 python3.9[64413]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 08:03:39 compute-0 sshd-session[64117]: Invalid user loginuser from 80.94.92.164 port 40156
Nov 22 08:03:39 compute-0 sshd-session[64117]: Connection closed by invalid user loginuser 80.94.92.164 port 40156 [preauth]
Nov 22 08:03:40 compute-0 sudo[64411]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:41 compute-0 sudo[64564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzwpdnkiwnztgawyypqgfqzzelnqurrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798620.616631-222-114420609121996/AnsiballZ_package_facts.py'
Nov 22 08:03:41 compute-0 sudo[64564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:41 compute-0 python3.9[64566]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 22 08:03:41 compute-0 sudo[64564]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:42 compute-0 sudo[64716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exdltyiubbxvwydpeozpwzfkirhcvhyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798622.2009737-232-120352955618070/AnsiballZ_stat.py'
Nov 22 08:03:42 compute-0 sudo[64716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:42 compute-0 python3.9[64718]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:03:42 compute-0 sudo[64716]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:43 compute-0 sudo[64841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcciuastkkudpmyxsyfitigzckmkbunn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798622.2009737-232-120352955618070/AnsiballZ_copy.py'
Nov 22 08:03:43 compute-0 sudo[64841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:43 compute-0 python3.9[64843]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763798622.2009737-232-120352955618070/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:03:43 compute-0 sudo[64841]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:44 compute-0 sudo[64995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orsfzjficshrrmyqthjymubwbquoqixc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798623.7873085-247-5845910634410/AnsiballZ_stat.py'
Nov 22 08:03:44 compute-0 sudo[64995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:44 compute-0 python3.9[64997]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:03:44 compute-0 sudo[64995]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:44 compute-0 sudo[65120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tycnraprvmupdmzvpmsfahmqlviichgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798623.7873085-247-5845910634410/AnsiballZ_copy.py'
Nov 22 08:03:44 compute-0 sudo[65120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:45 compute-0 python3.9[65122]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763798623.7873085-247-5845910634410/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:03:45 compute-0 sudo[65120]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:45 compute-0 sudo[65274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktdlsnzvulwrmxwhpvsdbdsrluzbehcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798625.516468-268-122997593533538/AnsiballZ_lineinfile.py'
Nov 22 08:03:45 compute-0 sudo[65274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:46 compute-0 python3.9[65276]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:03:46 compute-0 sudo[65274]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:46 compute-0 sudo[65428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwvfvrfppmxlidxvqnssjnubvrqbjbdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798626.6811445-283-199844890749072/AnsiballZ_setup.py'
Nov 22 08:03:46 compute-0 sudo[65428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:47 compute-0 python3.9[65430]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 08:03:47 compute-0 sudo[65428]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:48 compute-0 sudo[65512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swaldclooxereicjebovexlrtqraxjqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798626.6811445-283-199844890749072/AnsiballZ_systemd.py'
Nov 22 08:03:48 compute-0 sudo[65512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:48 compute-0 python3.9[65514]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:03:48 compute-0 sudo[65512]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:49 compute-0 sudo[65666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbhidpvhwhbxrewjrplwsdapjbnjnnmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798629.3260722-299-49730220451672/AnsiballZ_setup.py'
Nov 22 08:03:49 compute-0 sudo[65666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:49 compute-0 python3.9[65668]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 08:03:50 compute-0 sudo[65666]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:50 compute-0 sudo[65750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbflueuauymiirzzrvznmszcfirzustu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798629.3260722-299-49730220451672/AnsiballZ_systemd.py'
Nov 22 08:03:50 compute-0 sudo[65750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:50 compute-0 python3.9[65752]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:03:50 compute-0 chronyd[835]: chronyd exiting
Nov 22 08:03:50 compute-0 systemd[1]: Stopping NTP client/server...
Nov 22 08:03:50 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Nov 22 08:03:50 compute-0 systemd[1]: Stopped NTP client/server.
Nov 22 08:03:50 compute-0 systemd[1]: Starting NTP client/server...
Nov 22 08:03:50 compute-0 chronyd[65760]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 22 08:03:50 compute-0 chronyd[65760]: Frequency -26.245 +/- 0.221 ppm read from /var/lib/chrony/drift
Nov 22 08:03:50 compute-0 chronyd[65760]: Loaded seccomp filter (level 2)
Nov 22 08:03:50 compute-0 systemd[1]: Started NTP client/server.
Nov 22 08:03:50 compute-0 sudo[65750]: pam_unix(sudo:session): session closed for user root
Nov 22 08:03:51 compute-0 sshd-session[60902]: Connection closed by 192.168.122.30 port 34668
Nov 22 08:03:51 compute-0 sshd-session[60899]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:03:51 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Nov 22 08:03:51 compute-0 systemd[1]: session-14.scope: Consumed 24.694s CPU time.
Nov 22 08:03:51 compute-0 systemd-logind[826]: Session 14 logged out. Waiting for processes to exit.
Nov 22 08:03:51 compute-0 systemd-logind[826]: Removed session 14.
Nov 22 08:03:57 compute-0 sshd-session[65786]: Accepted publickey for zuul from 192.168.122.30 port 45852 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 08:03:57 compute-0 systemd-logind[826]: New session 15 of user zuul.
Nov 22 08:03:57 compute-0 systemd[1]: Started Session 15 of User zuul.
Nov 22 08:03:57 compute-0 sshd-session[65786]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 08:03:58 compute-0 python3.9[65939]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:03:59 compute-0 sudo[66093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liznzhoaqpnfrwdbojrggicuoojxaenh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798638.935247-33-12880496236571/AnsiballZ_file.py'
Nov 22 08:03:59 compute-0 sudo[66093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:03:59 compute-0 python3.9[66095]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:03:59 compute-0 sudo[66093]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:00 compute-0 sudo[66268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgdiarzjlpavybcunhwbmbvcvtuwngnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798639.7793825-41-145436571471428/AnsiballZ_stat.py'
Nov 22 08:04:00 compute-0 sudo[66268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:00 compute-0 python3.9[66270]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:04:00 compute-0 sudo[66268]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:00 compute-0 sudo[66346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdaxvpvaugwwthayzpibefimszkldthc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798639.7793825-41-145436571471428/AnsiballZ_file.py'
Nov 22 08:04:00 compute-0 sudo[66346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:00 compute-0 python3.9[66348]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.u8k912ac recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:00 compute-0 sudo[66346]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:01 compute-0 sudo[66498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqlxyufqczacuzzzxyduslwzkjioodyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798641.209869-61-47168390642210/AnsiballZ_stat.py'
Nov 22 08:04:01 compute-0 sudo[66498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:01 compute-0 python3.9[66500]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:04:01 compute-0 sudo[66498]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:02 compute-0 sudo[66621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhiahjhvozvqvpxgqeeveemfwwqwdjzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798641.209869-61-47168390642210/AnsiballZ_copy.py'
Nov 22 08:04:02 compute-0 sudo[66621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:02 compute-0 python3.9[66623]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763798641.209869-61-47168390642210/.source _original_basename=.wb3tqxhk follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:02 compute-0 sudo[66621]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:02 compute-0 sudo[66773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdkvrwepyinmstzoavkjgtkwclbdclgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798642.5543313-77-171694484704098/AnsiballZ_file.py'
Nov 22 08:04:02 compute-0 sudo[66773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:03 compute-0 python3.9[66775]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:04:03 compute-0 sudo[66773]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:03 compute-0 sudo[66925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knnauvsqwhycjbsrpqlzvhnmqmawgyte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798643.1861727-85-55222511247423/AnsiballZ_stat.py'
Nov 22 08:04:03 compute-0 sudo[66925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:03 compute-0 python3.9[66927]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:04:03 compute-0 sudo[66925]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:04 compute-0 sudo[67048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctxwickbwcgpanyctlnfekwxokxikfef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798643.1861727-85-55222511247423/AnsiballZ_copy.py'
Nov 22 08:04:04 compute-0 sudo[67048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:04 compute-0 python3.9[67050]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763798643.1861727-85-55222511247423/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:04:04 compute-0 sudo[67048]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:04 compute-0 sudo[67200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmglqvwqzevucksszokucytgjlibrnkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798644.458809-85-105923852296573/AnsiballZ_stat.py'
Nov 22 08:04:04 compute-0 sudo[67200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:04 compute-0 python3.9[67202]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:04:04 compute-0 sudo[67200]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:05 compute-0 sudo[67323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciwqfxwjgfffswhgzouprvlzlecjuzbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798644.458809-85-105923852296573/AnsiballZ_copy.py'
Nov 22 08:04:05 compute-0 sudo[67323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:05 compute-0 python3.9[67325]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763798644.458809-85-105923852296573/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:04:05 compute-0 sudo[67323]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:05 compute-0 sudo[67475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvntvokpudhfbotjwmkdxwunahfyqvch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798645.644146-114-274913089035960/AnsiballZ_file.py'
Nov 22 08:04:05 compute-0 sudo[67475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:06 compute-0 python3.9[67477]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:06 compute-0 sudo[67475]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:06 compute-0 sudo[67627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oapmlialwnxawvdyiugijmrdnqfipsxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798646.2593346-122-196674677586271/AnsiballZ_stat.py'
Nov 22 08:04:06 compute-0 sudo[67627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:06 compute-0 python3.9[67629]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:04:06 compute-0 sudo[67627]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:07 compute-0 sudo[67750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgwuubwpafhrihmalrbidpwrrqgvyysb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798646.2593346-122-196674677586271/AnsiballZ_copy.py'
Nov 22 08:04:07 compute-0 sudo[67750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:07 compute-0 python3.9[67752]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798646.2593346-122-196674677586271/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:07 compute-0 sudo[67750]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:08 compute-0 sudo[67902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfvomdsrqvzspwsvaiezsgfmuzofitkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798647.5628812-137-179193886750074/AnsiballZ_stat.py'
Nov 22 08:04:08 compute-0 sudo[67902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:08 compute-0 python3.9[67904]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:04:08 compute-0 sudo[67902]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:08 compute-0 sudo[68025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xblpnszfqsssjcpyzhlazalmzjtturjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798647.5628812-137-179193886750074/AnsiballZ_copy.py'
Nov 22 08:04:08 compute-0 sudo[68025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:08 compute-0 python3.9[68027]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798647.5628812-137-179193886750074/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:08 compute-0 sudo[68025]: pam_unix(sudo:session): session closed for user root
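
The preset installed above pairs with the edpm-container-shutdown.service unit shipped at 08:04:07: systemd preset files tell `systemctl preset` whether a unit should default to enabled or disabled. The preset body itself is not logged (content=NOT_LOGGING_PARAMETER); a minimal sketch, assuming it holds a single enable directive:

    # Sketch: install a preset and apply it. The "enable" body is an
    # assumption; the actual file content is not visible in this log.
    import pathlib
    import subprocess

    PRESET = pathlib.Path("/etc/systemd/system-preset/91-edpm-container-shutdown.preset")
    PRESET.write_text("enable edpm-container-shutdown.service\n")  # assumed content
    PRESET.chmod(0o644)

    # "systemctl preset" enables or disables the unit per the preset files.
    subprocess.run(["systemctl", "preset", "edpm-container-shutdown.service"], check=True)
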
Nov 22 08:04:09 compute-0 sudo[68177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yusbydwuhyxlvwdxnztnusdinsweedsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798648.9454072-152-211817129603325/AnsiballZ_systemd.py'
Nov 22 08:04:09 compute-0 sudo[68177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:09 compute-0 python3.9[68179]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:04:09 compute-0 systemd[1]: Reloading.
Nov 22 08:04:09 compute-0 systemd-rc-local-generator[68197]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:04:09 compute-0 systemd-sysv-generator[68203]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:04:10 compute-0 systemd[1]: Reloading.
Nov 22 08:04:10 compute-0 systemd-rc-local-generator[68241]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:04:10 compute-0 systemd-sysv-generator[68245]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:04:10 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Nov 22 08:04:10 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Nov 22 08:04:10 compute-0 sudo[68177]: pam_unix(sudo:session): session closed for user root
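
The single ansible.builtin.systemd task at 08:04:09 (daemon_reload=True, enabled=True, state=started) accounts for the surrounding journal lines: one reload for daemon_reload, a second reload around enabling the unit, then the start of the oneshot service. Roughly equivalent systemctl calls, sketched in Python:

    # Rough systemctl-level equivalent of the systemd task above
    # (daemon_reload=True, enabled=True, state=started).
    import subprocess

    def systemctl(*args: str) -> None:
        subprocess.run(["systemctl", *args], check=True)

    systemctl("daemon-reload")                      # first "Reloading." above
    systemctl("enable", "edpm-container-shutdown")  # likely source of the second reload
    systemctl("start", "edpm-container-shutdown")   # "Starting EDPM Container Shutdown..."
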
Nov 22 08:04:10 compute-0 sudo[68403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jebzpvvgfivltnniqwplpanpmbdmwbus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798650.500336-160-280552344613941/AnsiballZ_stat.py'
Nov 22 08:04:10 compute-0 sudo[68403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:10 compute-0 python3.9[68405]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:04:10 compute-0 sudo[68403]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:11 compute-0 sudo[68526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkjbqfnlgacxntmaxybokykgyyiqwxhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798650.500336-160-280552344613941/AnsiballZ_copy.py'
Nov 22 08:04:11 compute-0 sudo[68526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:11 compute-0 python3.9[68528]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798650.500336-160-280552344613941/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:11 compute-0 sudo[68526]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:11 compute-0 sudo[68678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysdeienwifsyobsqbuhbmjtxsujvqwed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798651.6190233-175-228294726953703/AnsiballZ_stat.py'
Nov 22 08:04:11 compute-0 sudo[68678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:12 compute-0 python3.9[68680]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:04:12 compute-0 sudo[68678]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:12 compute-0 sudo[68801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkcblqjtqdasjvzamjkqdcythtxwrzwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798651.6190233-175-228294726953703/AnsiballZ_copy.py'
Nov 22 08:04:12 compute-0 sudo[68801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:12 compute-0 python3.9[68803]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798651.6190233-175-228294726953703/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:12 compute-0 sudo[68801]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:13 compute-0 sudo[68953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnmicvwrbwprfifdirxigqjgsfmwlhoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798652.7925215-190-235583515361962/AnsiballZ_systemd.py'
Nov 22 08:04:13 compute-0 sudo[68953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:13 compute-0 python3.9[68955]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:04:13 compute-0 systemd[1]: Reloading.
Nov 22 08:04:13 compute-0 systemd-rc-local-generator[68984]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:04:13 compute-0 systemd-sysv-generator[68988]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:04:13 compute-0 systemd[1]: Reloading.
Nov 22 08:04:13 compute-0 systemd-rc-local-generator[69018]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:04:13 compute-0 systemd-sysv-generator[69023]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:04:13 compute-0 systemd[1]: Starting Create netns directory...
Nov 22 08:04:13 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 08:04:13 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 08:04:13 compute-0 systemd[1]: Finished Create netns directory.
Nov 22 08:04:13 compute-0 sudo[68953]: pam_unix(sudo:session): session closed for user root
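
run-netns-placeholder.mount appearing and immediately deactivating suggests the netns-placeholder oneshot creates and then removes a network namespace so that the /run/netns mount point is initialized before any container tooling needs it. That reading is an inference from the unit and mount names only; a hypothetical sketch of the pattern:

    # Hypothetical sketch of a "netns placeholder" oneshot: adding a
    # namespace makes iproute2 set up the /run/netns bind mount; deleting
    # it leaves the mount point initialized for later users.
    import subprocess

    subprocess.run(["ip", "netns", "add", "placeholder"], check=True)
    subprocess.run(["ip", "netns", "delete", "placeholder"], check=True)
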
Nov 22 08:04:14 compute-0 python3.9[69182]: ansible-ansible.builtin.service_facts Invoked
Nov 22 08:04:14 compute-0 network[69199]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 08:04:14 compute-0 network[69200]: 'network-scripts' will be removed from distribution in near future.
Nov 22 08:04:14 compute-0 network[69201]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 08:04:17 compute-0 sudo[69461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwmfnlopoaibvdnrxxkycmhimunxkuyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798657.5004144-206-195679945467713/AnsiballZ_systemd.py'
Nov 22 08:04:17 compute-0 sudo[69461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:18 compute-0 python3.9[69463]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:04:18 compute-0 systemd[1]: Reloading.
Nov 22 08:04:18 compute-0 systemd-rc-local-generator[69492]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:04:18 compute-0 systemd-sysv-generator[69496]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:04:18 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 22 08:04:18 compute-0 iptables.init[69504]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 22 08:04:18 compute-0 iptables.init[69504]: iptables: Flushing firewall rules: [  OK  ]
Nov 22 08:04:18 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Nov 22 08:04:18 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 22 08:04:18 compute-0 sudo[69461]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:19 compute-0 sudo[69698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmwplhsxenagoneqyljqwvpxzzcyxtrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798658.8869426-206-112803378896881/AnsiballZ_systemd.py'
Nov 22 08:04:19 compute-0 sudo[69698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:19 compute-0 python3.9[69700]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:04:19 compute-0 sudo[69698]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:20 compute-0 sudo[69852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqutigjusapuchybpksvqebubkyflpey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798659.7452126-222-6641074191447/AnsiballZ_systemd.py'
Nov 22 08:04:20 compute-0 sudo[69852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:20 compute-0 python3.9[69854]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:04:20 compute-0 systemd[1]: Reloading.
Nov 22 08:04:20 compute-0 systemd-rc-local-generator[69883]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:04:20 compute-0 systemd-sysv-generator[69886]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:04:20 compute-0 systemd[1]: Starting Netfilter Tables...
Nov 22 08:04:20 compute-0 systemd[1]: Finished Netfilter Tables.
Nov 22 08:04:20 compute-0 sudo[69852]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:21 compute-0 sudo[70044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzzxmwczjkneletswggieffuoinadmel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798660.8370874-230-182153314314644/AnsiballZ_command.py'
Nov 22 08:04:21 compute-0 sudo[70044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:21 compute-0 python3.9[70046]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:04:21 compute-0 sudo[70044]: pam_unix(sudo:session): session closed for user root
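
Between 08:04:17 and 08:04:21 the play hands the firewall over from the legacy iptables services to nftables: stop and disable iptables.service and ip6tables.service, enable and start nftables.service, then flush the kernel ruleset so the nft files applied later start from a clean slate. The same sequence as explicit commands:

    # The iptables -> nftables handoff above, as explicit commands.
    import subprocess

    def run(*cmd: str) -> None:
        subprocess.run(cmd, check=True)

    run("systemctl", "disable", "--now", "iptables.service")   # flushes rules via iptables.init
    run("systemctl", "disable", "--now", "ip6tables.service")
    run("systemctl", "enable", "--now", "nftables.service")
    run("nft", "flush", "ruleset")                             # start from an empty ruleset
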
Nov 22 08:04:22 compute-0 sudo[70197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbsxukghwteecvfxxonvcyyjjzjdwcgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798661.8180678-244-95249071282374/AnsiballZ_stat.py'
Nov 22 08:04:22 compute-0 sudo[70197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:22 compute-0 python3.9[70199]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:04:22 compute-0 sudo[70197]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:22 compute-0 sudo[70322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dagyqyjvfyagugdowpcwsmrgdlpmuvhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798661.8180678-244-95249071282374/AnsiballZ_copy.py'
Nov 22 08:04:22 compute-0 sudo[70322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:23 compute-0 python3.9[70324]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763798661.8180678-244-95249071282374/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:23 compute-0 sudo[70322]: pam_unix(sudo:session): session closed for user root
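
The sshd_config copy passes validate=/usr/sbin/sshd -T -f %s, so the rendered file is parsed by sshd at a temporary path and only moved into place if validation succeeds. A minimal standalone sketch of that write-validate-rename pattern (run as root, since sshd -T reads the host keys):

    # Write-validate-rename, as done by the copy task's validate= option.
    import os
    import subprocess
    import tempfile

    def install_validated(content: str, dest: str = "/etc/ssh/sshd_config") -> None:
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest))
        try:
            with os.fdopen(fd, "w") as f:
                f.write(content)
            # sshd -T -f <file> parses the config and exits non-zero on errors.
            subprocess.run(["/usr/sbin/sshd", "-T", "-f", tmp],
                           check=True, capture_output=True)
            os.chmod(tmp, 0o600)
            os.replace(tmp, dest)  # atomic on the same filesystem
        except Exception:
            os.unlink(tmp)
            raise
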
Nov 22 08:04:23 compute-0 sudo[70475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qocqrpwgclhwvivmcwiecyokpcvnytii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798663.241822-259-66979598587209/AnsiballZ_systemd.py'
Nov 22 08:04:23 compute-0 sudo[70475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:23 compute-0 python3.9[70477]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:04:23 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Nov 22 08:04:23 compute-0 sshd[1014]: Received SIGHUP; restarting.
Nov 22 08:04:23 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Nov 22 08:04:23 compute-0 sshd[1014]: Server listening on 0.0.0.0 port 22.
Nov 22 08:04:23 compute-0 sshd[1014]: Server listening on :: port 22.
Nov 22 08:04:23 compute-0 sudo[70475]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:24 compute-0 sudo[70631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgacaupxrnmruuusfbmysarxmebxremi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798664.018265-267-256946826621157/AnsiballZ_file.py'
Nov 22 08:04:24 compute-0 sudo[70631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:24 compute-0 python3.9[70633]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:24 compute-0 sudo[70631]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:24 compute-0 sudo[70783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svbnzvfwkbljeonbhgylnwtuerzhcoml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798664.6141121-275-237486028964199/AnsiballZ_stat.py'
Nov 22 08:04:24 compute-0 sudo[70783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:25 compute-0 python3.9[70785]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:04:25 compute-0 sudo[70783]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:25 compute-0 sudo[70906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riduvvkfrqkvoqvtemgikkekhihrrnwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798664.6141121-275-237486028964199/AnsiballZ_copy.py'
Nov 22 08:04:25 compute-0 sudo[70906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:25 compute-0 python3.9[70908]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798664.6141121-275-237486028964199/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:25 compute-0 sudo[70906]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:26 compute-0 sudo[71058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlmuhfntjvgrsdeumudznytghtinfmqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798665.8487267-293-273811716278858/AnsiballZ_timezone.py'
Nov 22 08:04:26 compute-0 sudo[71058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:26 compute-0 python3.9[71060]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 22 08:04:26 compute-0 systemd[1]: Starting Time & Date Service...
Nov 22 08:04:26 compute-0 systemd[1]: Started Time & Date Service.
Nov 22 08:04:26 compute-0 sudo[71058]: pam_unix(sudo:session): session closed for user root
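
community.general.timezone with name=UTC drives systemd-timedated on a host like this one, which is why the Time & Date Service starts at exactly this point. The effect, in one call:

    # Setting the timezone the way the timezone module does on systemd hosts;
    # talking to timedated is what produces the service start/stop lines above.
    import subprocess

    subprocess.run(["timedatectl", "set-timezone", "UTC"], check=True)
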
Nov 22 08:04:27 compute-0 sudo[71214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wazeurgumutfpplmgeugqtzmhtyyrcgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798666.898295-302-67176382982512/AnsiballZ_file.py'
Nov 22 08:04:27 compute-0 sudo[71214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:27 compute-0 python3.9[71216]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:27 compute-0 sudo[71214]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:27 compute-0 sudo[71366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cznamclwtyuyhgesrtviefxmtbplzetw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798667.534226-310-220146684349973/AnsiballZ_stat.py'
Nov 22 08:04:27 compute-0 sudo[71366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:27 compute-0 python3.9[71368]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:04:28 compute-0 sudo[71366]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:28 compute-0 sudo[71489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygkmzecobkrqzonaayvkrjntnzyjflfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798667.534226-310-220146684349973/AnsiballZ_copy.py'
Nov 22 08:04:28 compute-0 sudo[71489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:28 compute-0 python3.9[71491]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763798667.534226-310-220146684349973/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:28 compute-0 sudo[71489]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:28 compute-0 sudo[71641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odxtfwvrigxjiomwkcefwkengxrmjbyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798668.6938381-325-57672167610924/AnsiballZ_stat.py'
Nov 22 08:04:28 compute-0 sudo[71641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:29 compute-0 python3.9[71643]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:04:29 compute-0 sudo[71641]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:29 compute-0 sudo[71764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlxyxlhvrcfkvahyupvfsqignwpxatuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798668.6938381-325-57672167610924/AnsiballZ_copy.py'
Nov 22 08:04:29 compute-0 sudo[71764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:29 compute-0 python3.9[71766]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763798668.6938381-325-57672167610924/.source.yaml _original_basename=.1edp1out follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:29 compute-0 sudo[71764]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:30 compute-0 sudo[71916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejvxmpvuebgrhdyahwsxuymmutelonpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798669.852133-340-274832644644146/AnsiballZ_stat.py'
Nov 22 08:04:30 compute-0 sudo[71916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:30 compute-0 python3.9[71918]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:04:30 compute-0 sudo[71916]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:30 compute-0 sudo[72039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkgooeuutarhtcuwfsiamdmjekgvpiio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798669.852133-340-274832644644146/AnsiballZ_copy.py'
Nov 22 08:04:30 compute-0 sudo[72039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:30 compute-0 python3.9[72041]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798669.852133-340-274832644644146/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:30 compute-0 sudo[72039]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:31 compute-0 sudo[72191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfanhhtwdtvsxfrbpxotbocdanikydrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798671.0476384-355-145172861513979/AnsiballZ_command.py'
Nov 22 08:04:31 compute-0 sudo[72191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:31 compute-0 python3.9[72193]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:04:31 compute-0 sudo[72191]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:31 compute-0 sudo[72344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqrvkirfxuraoarfaiuudijytvkhjnfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798671.631691-363-40662125367598/AnsiballZ_command.py'
Nov 22 08:04:31 compute-0 sudo[72344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:32 compute-0 python3.9[72346]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:04:32 compute-0 sudo[72344]: pam_unix(sudo:session): session closed for user root
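
nft -j list ruleset dumps the live ruleset as JSON, which is easier for later tasks to inspect than the plain-text listing. For example:

    # Inspect the live ruleset via nft's JSON output.
    import json
    import subprocess

    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         check=True, capture_output=True, text=True).stdout
    ruleset = json.loads(out)["nftables"]
    tables = [e["table"]["name"] for e in ruleset if "table" in e]
    print("tables:", tables)
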
Nov 22 08:04:32 compute-0 sudo[72497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvnohkjsloxbgsehxpufilfoeaqxehdn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763798672.2393267-371-84339007845057/AnsiballZ_edpm_nftables_from_files.py'
Nov 22 08:04:32 compute-0 sudo[72497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:32 compute-0 python3[72499]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 22 08:04:32 compute-0 sudo[72497]: pam_unix(sudo:session): session closed for user root
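
edpm_nftables_from_files is a custom module; judging by its src argument, it aggregates the rule snippets staged earlier under /var/lib/edpm-config/firewall (sshd-networks.yaml, edpm-nftables-base.yaml, edpm-nftables-user-rules.yaml). A hypothetical analogue, assuming the snippets are YAML lists and PyYAML is installed:

    # Hypothetical analogue of edpm_nftables_from_files: merge every rule
    # snippet found under the src directory (requires PyYAML).
    import pathlib
    import yaml

    def load_rules(src: str = "/var/lib/edpm-config/firewall") -> list:
        rules = []
        for path in sorted(pathlib.Path(src).glob("*.yaml")):
            data = yaml.safe_load(path.read_text()) or []
            rules.extend(data)  # assumes each snippet is a list of rules
        return rules

    print(len(load_rules()), "rule entries")
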
Nov 22 08:04:33 compute-0 sudo[72649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-befxhcgetlssztvwdurdsvglepdpyijj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798672.992426-379-148876576384445/AnsiballZ_stat.py'
Nov 22 08:04:33 compute-0 sudo[72649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:33 compute-0 python3.9[72651]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:04:33 compute-0 sudo[72649]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:33 compute-0 sudo[72772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvsxtjgdgkuggeotcptkolybmmxtppuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798672.992426-379-148876576384445/AnsiballZ_copy.py'
Nov 22 08:04:33 compute-0 sudo[72772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:33 compute-0 python3.9[72774]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798672.992426-379-148876576384445/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:33 compute-0 sudo[72772]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:34 compute-0 sudo[72924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blvxtuneyfpidcervhkzsbmyvieuleym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798674.0756185-394-90697656936911/AnsiballZ_stat.py'
Nov 22 08:04:34 compute-0 sudo[72924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:34 compute-0 python3.9[72926]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:04:34 compute-0 sudo[72924]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:34 compute-0 sudo[73047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fojqpxeuknzduxzptvfjlchwcghhzkgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798674.0756185-394-90697656936911/AnsiballZ_copy.py'
Nov 22 08:04:34 compute-0 sudo[73047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:34 compute-0 python3.9[73049]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798674.0756185-394-90697656936911/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:35 compute-0 sudo[73047]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:35 compute-0 sudo[73199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oltsefpjbewjabagsxajsuljemwbkghn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798675.2571588-409-181727139058412/AnsiballZ_stat.py'
Nov 22 08:04:35 compute-0 sudo[73199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:35 compute-0 python3.9[73201]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:04:35 compute-0 sudo[73199]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:35 compute-0 sudo[73322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suklgrblpgoktmsrqmyjcwratjehbvhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798675.2571588-409-181727139058412/AnsiballZ_copy.py'
Nov 22 08:04:35 compute-0 sudo[73322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:36 compute-0 python3.9[73324]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798675.2571588-409-181727139058412/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:36 compute-0 sudo[73322]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:36 compute-0 sudo[73474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-micoiiefukanopalxwarleodmnqzlztu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798676.339172-424-193586081382897/AnsiballZ_stat.py'
Nov 22 08:04:36 compute-0 sudo[73474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:36 compute-0 python3.9[73476]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:04:36 compute-0 sudo[73474]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:37 compute-0 sudo[73597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvpqzhmegcueyazuiscoavwlbjrzpvyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798676.339172-424-193586081382897/AnsiballZ_copy.py'
Nov 22 08:04:37 compute-0 sudo[73597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:37 compute-0 python3.9[73599]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798676.339172-424-193586081382897/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:37 compute-0 sudo[73597]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:37 compute-0 sudo[73749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojftrrucgzjiddafsefsajjatfelmhju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798677.4868433-439-131362137837441/AnsiballZ_stat.py'
Nov 22 08:04:37 compute-0 sudo[73749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:38 compute-0 python3.9[73751]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:04:38 compute-0 sudo[73749]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:38 compute-0 sudo[73872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvtfawuyavhpqywjfollezjbnfiysxpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798677.4868433-439-131362137837441/AnsiballZ_copy.py'
Nov 22 08:04:38 compute-0 sudo[73872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:38 compute-0 python3.9[73874]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798677.4868433-439-131362137837441/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:38 compute-0 sudo[73872]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:39 compute-0 sudo[74024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vevvureuaplbvqdgpeydlgagijxvfang ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798678.7441623-454-27764160405026/AnsiballZ_file.py'
Nov 22 08:04:39 compute-0 sudo[74024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:39 compute-0 python3.9[74026]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:39 compute-0 sudo[74024]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:39 compute-0 sudo[74176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufsyshimpoxcsockhzbwgkngwjzsuwfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798679.3839948-462-139335178836552/AnsiballZ_command.py'
Nov 22 08:04:39 compute-0 sudo[74176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:39 compute-0 python3.9[74178]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:04:39 compute-0 sudo[74176]: pam_unix(sudo:session): session closed for user root
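
The generated fragments are concatenated in load order and piped to nft -c -f -, which parses and type-checks the whole script without committing it, so a malformed rule fails the play before the kernel ruleset is touched. The same dry run without the shell pipeline:

    # Dry-run check of the generated files, in the same order as the task above.
    import pathlib
    import subprocess

    FILES = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]
    script = "".join(pathlib.Path(f).read_text() for f in FILES)
    # -c: check only, do not apply; -f -: read the script from stdin.
    subprocess.run(["nft", "-c", "-f", "-"], input=script, text=True, check=True)
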
Nov 22 08:04:40 compute-0 sudo[74335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygkjyovfxvmywhcimkepusslekjzkjkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798680.0462806-470-83396071697779/AnsiballZ_blockinfile.py'
Nov 22 08:04:40 compute-0 sudo[74335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:40 compute-0 python3.9[74337]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:41 compute-0 sudo[74335]: pam_unix(sudo:session): session closed for user root
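
The blockinfile task keeps the four include lines between "# BEGIN ANSIBLE MANAGED BLOCK" and "# END ANSIBLE MANAGED BLOCK" markers in /etc/sysconfig/nftables.conf, validating the result with nft -c -f %s before saving. The marker logic itself is a replace-between-markers edit, sketched here without the validation step:

    # Replace-between-markers, the core of the blockinfile task above.
    import pathlib

    BEGIN = "# BEGIN ANSIBLE MANAGED BLOCK"
    END = "# END ANSIBLE MANAGED BLOCK"

    def set_block(path: str, block: str) -> None:
        p = pathlib.Path(path)
        lines = p.read_text().splitlines()
        if BEGIN in lines and END in lines:
            head = lines[: lines.index(BEGIN)]
            tail = lines[lines.index(END) + 1 :]
        else:
            head, tail = lines, []
        p.write_text("\n".join(head + [BEGIN, *block.splitlines(), END] + tail) + "\n")

    set_block("/etc/sysconfig/nftables.conf",
              'include "/etc/nftables/iptables.nft"\n'
              'include "/etc/nftables/edpm-chains.nft"\n'
              'include "/etc/nftables/edpm-rules.nft"\n'
              'include "/etc/nftables/edpm-jumps.nft"')
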
Nov 22 08:04:41 compute-0 sudo[74488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mysofcgysfncyjlwuwsnkmrrkokzstnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798681.2352576-479-144048796488053/AnsiballZ_file.py'
Nov 22 08:04:41 compute-0 sudo[74488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:41 compute-0 python3.9[74490]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:41 compute-0 sudo[74488]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:42 compute-0 sudo[74640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkucdcnxrxoomttctqxxwwsacuczpucm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798681.8201554-479-218336127253327/AnsiballZ_file.py'
Nov 22 08:04:42 compute-0 sudo[74640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:42 compute-0 python3.9[74642]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:42 compute-0 sudo[74640]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:42 compute-0 sudo[74792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyljfrgexroewzxkzynkwzvwsorqpdmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798682.4662118-494-28566903378066/AnsiballZ_mount.py'
Nov 22 08:04:42 compute-0 sudo[74792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:43 compute-0 python3.9[74794]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 22 08:04:43 compute-0 sudo[74792]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:43 compute-0 rsyslogd[1013]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 08:04:43 compute-0 sudo[74946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fedufigzpwxjjmjlzhmcjeoaelmusytp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798683.3285885-494-68562086096958/AnsiballZ_mount.py'
Nov 22 08:04:43 compute-0 sudo[74946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:43 compute-0 python3.9[74948]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 22 08:04:43 compute-0 sudo[74946]: pam_unix(sudo:session): session closed for user root
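
ansible.posix.mount with state=mounted and boot=True both mounts the filesystem now and persists it in /etc/fstab. For the two hugetlbfs mounts above, that amounts to:

    # What the two mount tasks amount to: mount hugetlbfs with a fixed page
    # size and persist the entry in /etc/fstab (boot=True).
    import pathlib
    import subprocess

    def mount_hugepages(path: str, pagesize: str) -> None:
        pathlib.Path(path).mkdir(exist_ok=True)
        subprocess.run(["mount", "-t", "hugetlbfs", "-o", f"pagesize={pagesize}",
                        "none", path], check=True)
        entry = f"none {path} hugetlbfs pagesize={pagesize} 0 0\n"
        fstab = pathlib.Path("/etc/fstab")
        if entry not in fstab.read_text():
            with fstab.open("a") as f:
                f.write(entry)

    mount_hugepages("/dev/hugepages1G", "1G")
    mount_hugepages("/dev/hugepages2M", "2M")
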
Nov 22 08:04:44 compute-0 sshd-session[65789]: Connection closed by 192.168.122.30 port 45852
Nov 22 08:04:44 compute-0 sshd-session[65786]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:04:44 compute-0 systemd-logind[826]: Session 15 logged out. Waiting for processes to exit.
Nov 22 08:04:44 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Nov 22 08:04:44 compute-0 systemd[1]: session-15.scope: Consumed 33.376s CPU time.
Nov 22 08:04:44 compute-0 systemd-logind[826]: Removed session 15.
Nov 22 08:04:49 compute-0 sshd-session[74974]: Accepted publickey for zuul from 192.168.122.30 port 53480 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 08:04:49 compute-0 systemd-logind[826]: New session 16 of user zuul.
Nov 22 08:04:49 compute-0 systemd[1]: Started Session 16 of User zuul.
Nov 22 08:04:49 compute-0 sshd-session[74974]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 08:04:50 compute-0 sudo[75127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uabidydhyxtjmfhcdrehkhhxymwkyfln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798689.5663674-16-163487351092934/AnsiballZ_tempfile.py'
Nov 22 08:04:50 compute-0 sudo[75127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:50 compute-0 python3.9[75129]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 22 08:04:50 compute-0 sudo[75127]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:50 compute-0 sudo[75279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atzdoytfgougxvpurpbkibqxutpjgvdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798690.3393612-28-270552897640824/AnsiballZ_stat.py'
Nov 22 08:04:50 compute-0 sudo[75279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:50 compute-0 python3.9[75281]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:04:50 compute-0 sudo[75279]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:51 compute-0 sudo[75431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kshhxhupsagvhgaifdulxunjobskvzem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798691.1057343-38-84966422473505/AnsiballZ_setup.py'
Nov 22 08:04:51 compute-0 sudo[75431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:51 compute-0 python3.9[75433]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:04:51 compute-0 sudo[75431]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:52 compute-0 sudo[75583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beasnzjyvgqqifxcsbqmrmttwzfbhgcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798692.1452909-47-227701811312407/AnsiballZ_blockinfile.py'
Nov 22 08:04:52 compute-0 sudo[75583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:52 compute-0 python3.9[75585]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEfniMEGoAX9jQphv2MKgjsT01Y/qu7UmPst6wp215MSZ1dqduTzuRDVBm7QKEL475yAUnl39e78aYAPWQyKYyi8+lXd80bskxg2PZIz/IxYua2yiqvzKCbeVTj8GsBScU8AJlPq/+DnPKUPauj5/DZz3UmSIY2sneayzZSkTc70A1AhzaQl9SLYHDg5RmcE4ypwe+DemCH0a/K3ZEbQIuZsGV/lt0/0nXFQa+oHsuENwWfhCUJim9XmDn42zy3Bi1j6/Y0uzKDZmJIL9yllfuGceg+YFtCtMyFvrf/mBLHc+zf/VHzBmaixY9oAYnOw8Jpior3fqQP67Hoahdrz2jQ7TIAUrV9oszpQdSW3FhO+A3lblRm5O2VDvTw6/zRZIXLBNM/6Cj3cOMxyIuZBDCPwzJP4V6IndYmp82nEsXfsJXAa3rKbhIEzXY6gC663tcgcgKmrlpOCaS/hmUc8CdIM/AGGQBoeygQMzlfa62g3k6GKRH+v0TSwa4C2gNEW8=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPlXFJ2FxmXFF4ZKzGXzoXnn/yTchwIY02z1wBy+/jvm
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF8RNEiBrDaWhFpKoMHzRCPaHOR96vQndjBGS0+t5XddvfFu9UTWvKtEko0k9SU3qE9tCb+IpRLrCCM+R8GVjLM=
                                             create=True mode=0644 path=/tmp/ansible.8f5bfgxg state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:52 compute-0 sudo[75583]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:53 compute-0 sudo[75735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxeksawtlzgbczobpyhqcjmmlvahdvqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798692.8619628-55-63858754858774/AnsiballZ_command.py'
Nov 22 08:04:53 compute-0 sudo[75735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:53 compute-0 python3.9[75737]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.8f5bfgxg' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:04:53 compute-0 sudo[75735]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:54 compute-0 sudo[75889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmhugfrcajyqnvueanytcadowsuabuhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798693.6148183-63-5205695422281/AnsiballZ_file.py'
Nov 22 08:04:54 compute-0 sudo[75889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:04:54 compute-0 python3.9[75891]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.8f5bfgxg state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:04:54 compute-0 sudo[75889]: pam_unix(sudo:session): session closed for user root
Nov 22 08:04:54 compute-0 sshd-session[74977]: Connection closed by 192.168.122.30 port 53480
Nov 22 08:04:54 compute-0 sshd-session[74974]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:04:54 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Nov 22 08:04:54 compute-0 systemd[1]: session-16.scope: Consumed 3.088s CPU time.
Nov 22 08:04:54 compute-0 systemd-logind[826]: Session 16 logged out. Waiting for processes to exit.
Nov 22 08:04:54 compute-0 systemd-logind[826]: Removed session 16.
Nov 22 08:04:56 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 22 08:04:59 compute-0 sshd-session[75918]: Accepted publickey for zuul from 192.168.122.30 port 57474 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 08:04:59 compute-0 systemd-logind[826]: New session 17 of user zuul.
Nov 22 08:05:00 compute-0 systemd[1]: Started Session 17 of User zuul.
Nov 22 08:05:00 compute-0 sshd-session[75918]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 08:05:01 compute-0 python3.9[76071]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:05:01 compute-0 sudo[76225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvhliuyaijixqowbslgpjegzzdkekpfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798701.3850555-32-187236258254789/AnsiballZ_systemd.py'
Nov 22 08:05:01 compute-0 sudo[76225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:02 compute-0 python3.9[76227]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 22 08:05:02 compute-0 sudo[76225]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:02 compute-0 sudo[76379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onmwclypsrdjofhueixixhgnnrguoaxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798702.6399772-40-129749420789177/AnsiballZ_systemd.py'
Nov 22 08:05:02 compute-0 sudo[76379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:03 compute-0 python3.9[76381]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:05:03 compute-0 sudo[76379]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:03 compute-0 sudo[76532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icowipcaoysqyarfgswtsxpueygqicje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798703.5366735-49-91453419583382/AnsiballZ_command.py'
Nov 22 08:05:03 compute-0 sudo[76532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:04 compute-0 python3.9[76534]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:05:04 compute-0 sudo[76532]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:04 compute-0 sudo[76685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxnakenujteknmgtieagcqtyqliimjfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798704.2714903-57-130898385777456/AnsiballZ_stat.py'
Nov 22 08:05:04 compute-0 sudo[76685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:04 compute-0 python3.9[76687]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:05:04 compute-0 sudo[76685]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:05 compute-0 sudo[76839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khjybhikxlxknyjeotxrbalexxiyivpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798705.0088391-65-237310923143553/AnsiballZ_command.py'
Nov 22 08:05:05 compute-0 sudo[76839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:05 compute-0 python3.9[76841]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:05:05 compute-0 sudo[76839]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:06 compute-0 sudo[76994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soeksauuipaaufkfqonozejdldqlmjpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798705.6375053-73-133798641853109/AnsiballZ_file.py'
Nov 22 08:05:06 compute-0 sudo[76994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:06 compute-0 python3.9[76996]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:05:06 compute-0 sudo[76994]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:06 compute-0 sshd-session[75921]: Connection closed by 192.168.122.30 port 57474
Nov 22 08:05:06 compute-0 sshd-session[75918]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:05:06 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Nov 22 08:05:06 compute-0 systemd[1]: session-17.scope: Consumed 4.191s CPU time.
Nov 22 08:05:06 compute-0 systemd-logind[826]: Session 17 logged out. Waiting for processes to exit.
Nov 22 08:05:06 compute-0 systemd-logind[826]: Removed session 17.
Nov 22 08:05:14 compute-0 sshd-session[77021]: Accepted publickey for zuul from 192.168.122.30 port 40280 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 08:05:14 compute-0 systemd-logind[826]: New session 18 of user zuul.
Nov 22 08:05:14 compute-0 systemd[1]: Started Session 18 of User zuul.
Nov 22 08:05:14 compute-0 sshd-session[77021]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 08:05:15 compute-0 python3.9[77174]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:05:15 compute-0 sudo[77328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkynbaejlbvljsfewacxvnzryglfhojo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798715.4514644-34-93916854652614/AnsiballZ_setup.py'
Nov 22 08:05:15 compute-0 sudo[77328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:16 compute-0 python3.9[77330]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 08:05:16 compute-0 sudo[77328]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:16 compute-0 sudo[77412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgsypsiofvfrvhhezcvwwythmdckeeao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798715.4514644-34-93916854652614/AnsiballZ_dnf.py'
Nov 22 08:05:16 compute-0 sudo[77412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:17 compute-0 python3.9[77414]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 08:05:18 compute-0 sudo[77412]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:19 compute-0 python3.9[77565]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:05:20 compute-0 python3.9[77716]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 08:05:21 compute-0 python3.9[77866]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:05:21 compute-0 python3.9[78016]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:05:22 compute-0 sshd-session[77024]: Connection closed by 192.168.122.30 port 40280
Nov 22 08:05:22 compute-0 sshd-session[77021]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:05:22 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Nov 22 08:05:22 compute-0 systemd[1]: session-18.scope: Consumed 5.686s CPU time.
Nov 22 08:05:22 compute-0 systemd-logind[826]: Session 18 logged out. Waiting for processes to exit.
Nov 22 08:05:22 compute-0 systemd-logind[826]: Removed session 18.
Nov 22 08:05:30 compute-0 sshd-session[78041]: Accepted publickey for zuul from 192.168.122.30 port 56054 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 08:05:30 compute-0 systemd-logind[826]: New session 19 of user zuul.
Nov 22 08:05:30 compute-0 systemd[1]: Started Session 19 of User zuul.
Nov 22 08:05:30 compute-0 sshd-session[78041]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 08:05:31 compute-0 python3.9[78194]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:05:33 compute-0 sudo[78348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ravdekgnlkithfmlhqfakuqfqsbubywe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798732.7232945-50-253935439197068/AnsiballZ_file.py'
Nov 22 08:05:33 compute-0 sudo[78348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:33 compute-0 python3.9[78350]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:05:33 compute-0 sudo[78348]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:33 compute-0 sudo[78500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uubbeeehkoklnhqnjtuzorcjgierqfdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798733.4533343-50-269130410373179/AnsiballZ_file.py'
Nov 22 08:05:33 compute-0 sudo[78500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:33 compute-0 python3.9[78502]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:05:33 compute-0 sudo[78500]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:34 compute-0 sudo[78652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sacsfpkewrynnjcvxwunamyckxvlglog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798734.0608823-65-123985396202853/AnsiballZ_stat.py'
Nov 22 08:05:34 compute-0 sudo[78652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:34 compute-0 python3.9[78654]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:05:34 compute-0 sudo[78652]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:35 compute-0 sudo[78775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spjdogxnczsugwsxkzybzxburysbtkae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798734.0608823-65-123985396202853/AnsiballZ_copy.py'
Nov 22 08:05:35 compute-0 sudo[78775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:35 compute-0 python3.9[78777]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798734.0608823-65-123985396202853/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=7cd6b0f71ca884c9c5e8b4dd82398237ef1748e0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:05:35 compute-0 sudo[78775]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:35 compute-0 sudo[78927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrkysnwhpnrcbckqdszdsfgzuotzgrrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798735.4481654-65-189754178281690/AnsiballZ_stat.py'
Nov 22 08:05:35 compute-0 sudo[78927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:35 compute-0 python3.9[78929]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:05:35 compute-0 sudo[78927]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:36 compute-0 sudo[79050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-delemxslqswvghfrxksuobjuhnngqeeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798735.4481654-65-189754178281690/AnsiballZ_copy.py'
Nov 22 08:05:36 compute-0 sudo[79050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:36 compute-0 python3.9[79052]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798735.4481654-65-189754178281690/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=e3b1a1d3bb995823a4997f228aae1979601051a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:05:36 compute-0 sudo[79050]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:36 compute-0 sudo[79202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcgnjnvdygnomswzxusmlbdkzsvdzcle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798736.527201-65-187910653615795/AnsiballZ_stat.py'
Nov 22 08:05:36 compute-0 sudo[79202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:36 compute-0 python3.9[79204]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:05:36 compute-0 sudo[79202]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:37 compute-0 sudo[79325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfkxvebiaefpdhaxilaignvadzqptlnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798736.527201-65-187910653615795/AnsiballZ_copy.py'
Nov 22 08:05:37 compute-0 sudo[79325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:37 compute-0 python3.9[79327]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798736.527201-65-187910653615795/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=8899e28e7f57fa3bdcdb6c44d8ebdd014e967e3a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:05:37 compute-0 sudo[79325]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:38 compute-0 sudo[79477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eolnbjpsbdkqgaiiscjykdfgqejgiacd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798737.9970284-109-39744695870361/AnsiballZ_file.py'
Nov 22 08:05:38 compute-0 sudo[79477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:38 compute-0 python3.9[79479]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:05:38 compute-0 sudo[79477]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:38 compute-0 sudo[79629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnjpwpimmbknfgdwczpamkkwyszvoeok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798738.6014178-109-247267919045958/AnsiballZ_file.py'
Nov 22 08:05:38 compute-0 sudo[79629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:39 compute-0 python3.9[79631]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:05:39 compute-0 sudo[79629]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:39 compute-0 sudo[79781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrwmddnnisditslodofyckptibpnwine ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798739.2949598-124-272320177536415/AnsiballZ_stat.py'
Nov 22 08:05:39 compute-0 sudo[79781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:39 compute-0 python3.9[79783]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:05:39 compute-0 sudo[79781]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:40 compute-0 sudo[79904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oohebdeuailwffqlwicozxcnysieihmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798739.2949598-124-272320177536415/AnsiballZ_copy.py'
Nov 22 08:05:40 compute-0 sudo[79904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:40 compute-0 python3.9[79906]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798739.2949598-124-272320177536415/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=adec66757ef36d8ea8169d366ad21837943ebbb3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:05:40 compute-0 sudo[79904]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:40 compute-0 sudo[80056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjbgrbdvechfjhjzcwuvmfojwxxrtbjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798740.4084623-124-173945565602918/AnsiballZ_stat.py'
Nov 22 08:05:40 compute-0 sudo[80056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:41 compute-0 python3.9[80058]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:05:41 compute-0 sudo[80056]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:41 compute-0 sudo[80179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwlkojhkdfhmrwyvvoajghfxyzwynkuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798740.4084623-124-173945565602918/AnsiballZ_copy.py'
Nov 22 08:05:41 compute-0 sudo[80179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:41 compute-0 python3.9[80181]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798740.4084623-124-173945565602918/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=e3b1a1d3bb995823a4997f228aae1979601051a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:05:41 compute-0 sudo[80179]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:41 compute-0 sudo[80331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erznnnaninlpwhrturdiljrcpkxvlisb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798741.7182436-124-55258880674095/AnsiballZ_stat.py'
Nov 22 08:05:41 compute-0 sudo[80331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:42 compute-0 python3.9[80333]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:05:42 compute-0 sudo[80331]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:42 compute-0 sudo[80454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgfiemqlmvgxipjtfnuhtwenlodkdsfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798741.7182436-124-55258880674095/AnsiballZ_copy.py'
Nov 22 08:05:42 compute-0 sudo[80454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:42 compute-0 python3.9[80456]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798741.7182436-124-55258880674095/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=469b6288ee1a6365f7743290fd102594bbf362bc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:05:42 compute-0 sudo[80454]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:43 compute-0 sudo[80606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwnznwferprhlhufcksybastokuionvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798742.9756548-168-49018993232616/AnsiballZ_file.py'
Nov 22 08:05:43 compute-0 sudo[80606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:43 compute-0 python3.9[80608]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:05:43 compute-0 sudo[80606]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:43 compute-0 sudo[80758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbjyxhyechosdlrnmkdeyiveavmymegw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798743.6371953-168-145356544546511/AnsiballZ_file.py'
Nov 22 08:05:43 compute-0 sudo[80758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:44 compute-0 python3.9[80760]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:05:44 compute-0 sudo[80758]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:44 compute-0 sudo[80910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otkwgikbifrpljribteefmklntbdlelu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798744.323126-183-254976988775714/AnsiballZ_stat.py'
Nov 22 08:05:44 compute-0 sudo[80910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:44 compute-0 python3.9[80912]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:05:44 compute-0 sudo[80910]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:45 compute-0 sudo[81033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpsjdgmiqglyyulahcftjkxvpetctqay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798744.323126-183-254976988775714/AnsiballZ_copy.py'
Nov 22 08:05:45 compute-0 sudo[81033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:45 compute-0 python3.9[81035]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798744.323126-183-254976988775714/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=196ba55610aabbc850984677674c598c3b367d19 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:05:45 compute-0 sudo[81033]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:45 compute-0 sudo[81185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkmjkcvzhbhupofuxzalidjczqiybrmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798745.678888-183-183640913004934/AnsiballZ_stat.py'
Nov 22 08:05:45 compute-0 sudo[81185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:46 compute-0 python3.9[81187]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:05:46 compute-0 sudo[81185]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:46 compute-0 sudo[81308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgfmsblwxjilsgjgbcujgwbjdwzoqawx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798745.678888-183-183640913004934/AnsiballZ_copy.py'
Nov 22 08:05:46 compute-0 sudo[81308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:46 compute-0 python3.9[81310]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798745.678888-183-183640913004934/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a9a6df8537d759f2333d89d1ff33cdbc82a1f599 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:05:46 compute-0 sudo[81308]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:46 compute-0 sudo[81460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-febmwlnkraopxhtepcbysyukppxmldgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798746.7654126-183-205110692173317/AnsiballZ_stat.py'
Nov 22 08:05:46 compute-0 sudo[81460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:47 compute-0 python3.9[81462]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:05:47 compute-0 sudo[81460]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:47 compute-0 sudo[81583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxsnwrfepdcxypqliicyydlkijdrrtss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798746.7654126-183-205110692173317/AnsiballZ_copy.py'
Nov 22 08:05:47 compute-0 sudo[81583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:47 compute-0 python3.9[81585]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798746.7654126-183-205110692173317/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=8020e29674be3284973d479eab88e487dd5004df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:05:47 compute-0 sudo[81583]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:48 compute-0 sudo[81735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-granogvytdoyfxywiapmrkrwwswjzuqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798747.9225492-227-59651427177331/AnsiballZ_file.py'
Nov 22 08:05:48 compute-0 sudo[81735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:48 compute-0 python3.9[81737]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:05:48 compute-0 sudo[81735]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:48 compute-0 sudo[81887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoaggbvsyhgrfoyoaowtaudaagibonuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798748.5052874-227-34897321466494/AnsiballZ_file.py'
Nov 22 08:05:48 compute-0 sudo[81887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:48 compute-0 python3.9[81889]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:05:48 compute-0 sudo[81887]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:49 compute-0 sudo[82039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxknlwngrziydshepsutipmmxzvcemxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798749.2707233-242-36906110675073/AnsiballZ_stat.py'
Nov 22 08:05:49 compute-0 sudo[82039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:49 compute-0 python3.9[82041]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:05:49 compute-0 sudo[82039]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:50 compute-0 sudo[82162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egkgahvuojizbaejbrlwszsxlwnjuwgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798749.2707233-242-36906110675073/AnsiballZ_copy.py'
Nov 22 08:05:50 compute-0 sudo[82162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:50 compute-0 python3.9[82164]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798749.2707233-242-36906110675073/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=33286af91487eacc9790c5474c5166189ebfc953 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:05:50 compute-0 sudo[82162]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:50 compute-0 sudo[82314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keydcdtzjekhtlmebjpsvlpycpyiwbru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798750.3198986-242-105783738313950/AnsiballZ_stat.py'
Nov 22 08:05:50 compute-0 sudo[82314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:50 compute-0 python3.9[82316]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:05:50 compute-0 sudo[82314]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:51 compute-0 sudo[82437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttwcppwsjxpyphzbrisiicfvnfunqilp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798750.3198986-242-105783738313950/AnsiballZ_copy.py'
Nov 22 08:05:51 compute-0 sudo[82437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:51 compute-0 python3.9[82439]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798750.3198986-242-105783738313950/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=c5a5bb552af9fa8a07a80ed073e9f34df5b28cab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:05:51 compute-0 sudo[82437]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:51 compute-0 sudo[82589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzionbdglrupllsyamvsagntantseoxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798751.4216251-242-96867540361882/AnsiballZ_stat.py'
Nov 22 08:05:51 compute-0 sudo[82589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:51 compute-0 python3.9[82591]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:05:51 compute-0 sudo[82589]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:52 compute-0 sudo[82712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vokjixuhdhoojaccqnernflanavergkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798751.4216251-242-96867540361882/AnsiballZ_copy.py'
Nov 22 08:05:52 compute-0 sudo[82712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:52 compute-0 python3.9[82714]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798751.4216251-242-96867540361882/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=4062c5605a50500db71f5d0858381aa4eba587d3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:05:52 compute-0 sudo[82712]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:52 compute-0 sudo[82864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbltqedhwbrjphbbobbctmtvziayfwan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798752.5425155-286-51423885656763/AnsiballZ_file.py'
Nov 22 08:05:52 compute-0 sudo[82864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:52 compute-0 python3.9[82866]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:05:52 compute-0 sudo[82864]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:53 compute-0 sudo[83016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prmnosmjcuekojocnlkxhjobgryuqnbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798753.0986273-286-197140402668390/AnsiballZ_file.py'
Nov 22 08:05:53 compute-0 sudo[83016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:53 compute-0 python3.9[83018]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:05:53 compute-0 sudo[83016]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:54 compute-0 sudo[83168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yswyohjwbtetfccfukehynpdowsxiyfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798753.753056-301-237943628633679/AnsiballZ_stat.py'
Nov 22 08:05:54 compute-0 sudo[83168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:54 compute-0 python3.9[83170]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:05:54 compute-0 sudo[83168]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:54 compute-0 sudo[83291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uciuflbqlsucqqvphxjgxqowkjeynchs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798753.753056-301-237943628633679/AnsiballZ_copy.py'
Nov 22 08:05:54 compute-0 sudo[83291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:54 compute-0 python3.9[83293]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798753.753056-301-237943628633679/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=8bb15f922429a4f5e8c36faa0b2e75e670a0402d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:05:54 compute-0 sudo[83291]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:55 compute-0 sudo[83443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aixmpaklvxcmgvhrafswkgadicvvffjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798754.8503568-301-109066326519299/AnsiballZ_stat.py'
Nov 22 08:05:55 compute-0 sudo[83443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:55 compute-0 python3.9[83445]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:05:55 compute-0 sudo[83443]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:55 compute-0 sudo[83566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgmlgscmkwanmgvbmqmqgjovjsudwruz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798754.8503568-301-109066326519299/AnsiballZ_copy.py'
Nov 22 08:05:55 compute-0 sudo[83566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:55 compute-0 python3.9[83568]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798754.8503568-301-109066326519299/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a9a6df8537d759f2333d89d1ff33cdbc82a1f599 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:05:55 compute-0 sudo[83566]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:56 compute-0 sudo[83718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqxhjbxwylvaskhfiiidnlslrwmpvhie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798755.9473572-301-241556801288746/AnsiballZ_stat.py'
Nov 22 08:05:56 compute-0 sudo[83718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:56 compute-0 python3.9[83720]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:05:56 compute-0 sudo[83718]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:56 compute-0 sudo[83841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znkupidmgqzsoxfsngxrvnzldcawqsxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798755.9473572-301-241556801288746/AnsiballZ_copy.py'
Nov 22 08:05:56 compute-0 sudo[83841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:57 compute-0 python3.9[83843]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798755.9473572-301-241556801288746/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=24e640589cc18c1a9eff874b76886bc356b0f19b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:05:57 compute-0 sudo[83841]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:58 compute-0 sudo[83993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcckhqjfprvtvaexdibzbjfnhkdnrunz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798757.808506-361-9296093722848/AnsiballZ_file.py'
Nov 22 08:05:58 compute-0 sudo[83993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:58 compute-0 python3.9[83995]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:05:58 compute-0 sudo[83993]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:58 compute-0 sudo[84145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olmzktmhxsezimggklvbynqchigxscvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798758.455765-369-98254576131129/AnsiballZ_stat.py'
Nov 22 08:05:58 compute-0 sudo[84145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:59 compute-0 python3.9[84147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:05:59 compute-0 sudo[84145]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:59 compute-0 sudo[84268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oocrzeteeujdyeqfyjgakncpeiwfkxne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798758.455765-369-98254576131129/AnsiballZ_copy.py'
Nov 22 08:05:59 compute-0 sudo[84268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:05:59 compute-0 python3.9[84270]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798758.455765-369-98254576131129/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d7d3c223199da9fcef714ed30a45020930d987d6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:05:59 compute-0 sudo[84268]: pam_unix(sudo:session): session closed for user root
Nov 22 08:05:59 compute-0 chronyd[65760]: Selected source 192.95.0.223 (pool.ntp.org)
Nov 22 08:06:00 compute-0 sudo[84420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwnymyckcxjvbxufzmkectlizarvrkqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798759.799576-385-48464302687700/AnsiballZ_file.py'
Nov 22 08:06:00 compute-0 sudo[84420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:00 compute-0 python3.9[84422]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:06:00 compute-0 sudo[84420]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:00 compute-0 sudo[84572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boaxywtfgwfookwzwlmbysustzniixyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798760.4255197-393-83954042528809/AnsiballZ_stat.py'
Nov 22 08:06:00 compute-0 sudo[84572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:00 compute-0 python3.9[84574]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:00 compute-0 sudo[84572]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:01 compute-0 sudo[84695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itfxxyinjlmdlhdsvqtvwkdcdriaxjzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798760.4255197-393-83954042528809/AnsiballZ_copy.py'
Nov 22 08:06:01 compute-0 sudo[84695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:01 compute-0 anacron[50953]: Job `cron.daily' started
Nov 22 08:06:01 compute-0 anacron[50953]: Job `cron.daily' terminated
Nov 22 08:06:01 compute-0 python3.9[84697]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798760.4255197-393-83954042528809/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d7d3c223199da9fcef714ed30a45020930d987d6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:01 compute-0 sudo[84695]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:01 compute-0 sudo[84849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bshneqgqoqrvtnavycytqrcoylrffdeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798761.6347723-409-127662364744409/AnsiballZ_file.py'
Nov 22 08:06:01 compute-0 sudo[84849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:02 compute-0 python3.9[84851]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:06:02 compute-0 sudo[84849]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:02 compute-0 sudo[85001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugumlroyrxkrkenxyaiwdjbmehkzxcwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798762.3434637-417-262199504350228/AnsiballZ_stat.py'
Nov 22 08:06:02 compute-0 sudo[85001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:02 compute-0 python3.9[85003]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:02 compute-0 sudo[85001]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:03 compute-0 sudo[85124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbjcqiboohnkqtoosrklrqdmanebjdox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798762.3434637-417-262199504350228/AnsiballZ_copy.py'
Nov 22 08:06:03 compute-0 sudo[85124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:03 compute-0 python3.9[85126]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798762.3434637-417-262199504350228/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d7d3c223199da9fcef714ed30a45020930d987d6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:03 compute-0 sudo[85124]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:03 compute-0 sudo[85276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdwrbalwanqcdbneblkmqxvjtryizazh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798763.575631-433-199262434310457/AnsiballZ_file.py'
Nov 22 08:06:03 compute-0 sudo[85276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:04 compute-0 python3.9[85278]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:06:04 compute-0 sudo[85276]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:04 compute-0 sudo[85428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gguhqfsgqjtynzunioskfljahxkdfmiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798764.2268565-441-34326921257898/AnsiballZ_stat.py'
Nov 22 08:06:04 compute-0 sudo[85428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:04 compute-0 python3.9[85430]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:04 compute-0 sudo[85428]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:05 compute-0 sudo[85551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqhazacqhsfvcfoajlmqpmafcpxksvzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798764.2268565-441-34326921257898/AnsiballZ_copy.py'
Nov 22 08:06:05 compute-0 sudo[85551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:05 compute-0 python3.9[85553]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798764.2268565-441-34326921257898/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d7d3c223199da9fcef714ed30a45020930d987d6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:05 compute-0 sudo[85551]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:05 compute-0 sudo[85703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwphscburljkukjkknxnoaqrpiipsash ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798765.4813182-457-88637170062461/AnsiballZ_file.py'
Nov 22 08:06:05 compute-0 sudo[85703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:06 compute-0 python3.9[85705]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:06:06 compute-0 sudo[85703]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:06 compute-0 sudo[85855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwqkaunztgeculxczsvgcftludasfsmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798766.2873116-465-43965670092911/AnsiballZ_stat.py'
Nov 22 08:06:06 compute-0 sudo[85855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:06 compute-0 python3.9[85857]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:06 compute-0 sudo[85855]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:07 compute-0 sudo[85978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axofniirwohzzebrbdwvhiorqpdnxlnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798766.2873116-465-43965670092911/AnsiballZ_copy.py'
Nov 22 08:06:07 compute-0 sudo[85978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:07 compute-0 python3.9[85980]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798766.2873116-465-43965670092911/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d7d3c223199da9fcef714ed30a45020930d987d6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:07 compute-0 sudo[85978]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:07 compute-0 sudo[86130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byxqrbpezzjodfwvttijkewxaqhqgwjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798767.5475268-481-255286396538277/AnsiballZ_file.py'
Nov 22 08:06:07 compute-0 sudo[86130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:08 compute-0 python3.9[86132]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:06:08 compute-0 sudo[86130]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:08 compute-0 sudo[86282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukjskmjqstmveuwxlwfehefygklhqras ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798768.1864338-489-131267748342876/AnsiballZ_stat.py'
Nov 22 08:06:08 compute-0 sudo[86282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:08 compute-0 python3.9[86284]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:08 compute-0 sudo[86282]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:08 compute-0 sudo[86405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lipdczkqkmumujezmykimxngfluiqauf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798768.1864338-489-131267748342876/AnsiballZ_copy.py'
Nov 22 08:06:08 compute-0 sudo[86405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:09 compute-0 python3.9[86407]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798768.1864338-489-131267748342876/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d7d3c223199da9fcef714ed30a45020930d987d6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:09 compute-0 sudo[86405]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:09 compute-0 sudo[86557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtgubbxvdogfclfueaxyvhcociqruksp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798769.5613647-505-79003658725478/AnsiballZ_file.py'
Nov 22 08:06:09 compute-0 sudo[86557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:10 compute-0 python3.9[86559]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:06:10 compute-0 sudo[86557]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:10 compute-0 sudo[86709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdxanhknojlepsfpczgaecgmgwfngkhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798770.2590144-513-280198859655837/AnsiballZ_stat.py'
Nov 22 08:06:10 compute-0 sudo[86709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:10 compute-0 python3.9[86711]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:10 compute-0 sudo[86709]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:11 compute-0 sudo[86832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laizlttqwltprmjnglupsbglgciqaknz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798770.2590144-513-280198859655837/AnsiballZ_copy.py'
Nov 22 08:06:11 compute-0 sudo[86832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:11 compute-0 python3.9[86834]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798770.2590144-513-280198859655837/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d7d3c223199da9fcef714ed30a45020930d987d6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:11 compute-0 sudo[86832]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:11 compute-0 sudo[86984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqmsglewcddrviznovjlupnkheulrnua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798771.5795572-529-136817922244620/AnsiballZ_file.py'
Nov 22 08:06:11 compute-0 sudo[86984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:12 compute-0 python3.9[86986]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:06:12 compute-0 sudo[86984]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:12 compute-0 sudo[87136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhfsshnrbqzqecamffewlchqfhkyzlrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798772.2066898-537-167324832881843/AnsiballZ_stat.py'
Nov 22 08:06:12 compute-0 sudo[87136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:12 compute-0 python3.9[87138]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:12 compute-0 sudo[87136]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:12 compute-0 sudo[87259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okbvnanpllatrpapgbjvjsufvtruhoxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798772.2066898-537-167324832881843/AnsiballZ_copy.py'
Nov 22 08:06:12 compute-0 sudo[87259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:13 compute-0 python3.9[87261]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798772.2066898-537-167324832881843/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d7d3c223199da9fcef714ed30a45020930d987d6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:13 compute-0 sudo[87259]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:13 compute-0 sshd-session[78044]: Connection closed by 192.168.122.30 port 56054
Nov 22 08:06:13 compute-0 sshd-session[78041]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:06:13 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Nov 22 08:06:13 compute-0 systemd[1]: session-19.scope: Consumed 32.040s CPU time.
Nov 22 08:06:13 compute-0 systemd-logind[826]: Session 19 logged out. Waiting for processes to exit.
Nov 22 08:06:13 compute-0 systemd-logind[826]: Removed session 19.
Nov 22 08:06:19 compute-0 sshd-session[87286]: Accepted publickey for zuul from 192.168.122.30 port 56258 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 08:06:19 compute-0 systemd-logind[826]: New session 20 of user zuul.
Nov 22 08:06:19 compute-0 systemd[1]: Started Session 20 of User zuul.
Nov 22 08:06:19 compute-0 sshd-session[87286]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 08:06:20 compute-0 python3.9[87439]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:06:21 compute-0 sudo[87593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duwrxyoxaijhvyfljnslpwsycueydwnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798781.3691676-34-72542339174116/AnsiballZ_file.py'
Nov 22 08:06:21 compute-0 sudo[87593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:21 compute-0 python3.9[87595]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:06:21 compute-0 sudo[87593]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:22 compute-0 sudo[87745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gupftvfvrzgczstpzppzuaetgqsxsjwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798782.1026645-34-195082487050785/AnsiballZ_file.py'
Nov 22 08:06:22 compute-0 sudo[87745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:22 compute-0 python3.9[87747]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:06:22 compute-0 sudo[87745]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:23 compute-0 python3.9[87897]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:06:23 compute-0 sudo[88047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbknzfovuuxkgarruolcoglepxrjfpdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798783.3573048-57-268301213992303/AnsiballZ_seboolean.py'
Nov 22 08:06:23 compute-0 sudo[88047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:23 compute-0 python3.9[88049]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 22 08:06:25 compute-0 sudo[88047]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:25 compute-0 sudo[88203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iultebymozxfxnaewccrtbrukipzpznz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798785.364178-67-187210610707772/AnsiballZ_setup.py'
Nov 22 08:06:25 compute-0 dbus-broker-launch[817]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 22 08:06:25 compute-0 sudo[88203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:25 compute-0 python3.9[88205]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 08:06:26 compute-0 sudo[88203]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:26 compute-0 sudo[88287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imjxvpkepaezodirchemhhjlpufgfqnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798785.364178-67-187210610707772/AnsiballZ_dnf.py'
Nov 22 08:06:26 compute-0 sudo[88287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:26 compute-0 python3.9[88289]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 08:06:28 compute-0 sudo[88287]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:28 compute-0 sudo[88440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awqwqkzewonbdkgjuncwqreefilojusd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798788.212905-79-197579137352778/AnsiballZ_systemd.py'
Nov 22 08:06:28 compute-0 sudo[88440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:29 compute-0 python3.9[88442]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 08:06:29 compute-0 sudo[88440]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:29 compute-0 sudo[88595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eprentyttcafeqiazevjnraydtyahpoh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763798789.2592545-87-152775010956228/AnsiballZ_edpm_nftables_snippet.py'
Nov 22 08:06:29 compute-0 sudo[88595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:29 compute-0 python3[88597]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 22 08:06:29 compute-0 sudo[88595]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:30 compute-0 sudo[88747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raewrjfogekjsjcghqhnvkpxuqzcltjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798790.1209295-96-50213798239936/AnsiballZ_file.py'
Nov 22 08:06:30 compute-0 sudo[88747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:30 compute-0 python3.9[88749]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:30 compute-0 sudo[88747]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:31 compute-0 sudo[88900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikzfbrfgvlvmluyxwyfotibfkctkejll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798790.7123184-104-8266800348401/AnsiballZ_stat.py'
Nov 22 08:06:31 compute-0 sudo[88900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:31 compute-0 python3.9[88902]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:31 compute-0 sudo[88900]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:31 compute-0 sudo[88978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixrnxsxnealcjumeynwyizlqimihpkuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798790.7123184-104-8266800348401/AnsiballZ_file.py'
Nov 22 08:06:31 compute-0 sudo[88978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:31 compute-0 python3.9[88980]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:31 compute-0 sudo[88978]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:32 compute-0 sudo[89130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wauwjjkoifaxiigldkxmokhtwotekkcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798791.869702-116-135957785468995/AnsiballZ_stat.py'
Nov 22 08:06:32 compute-0 sudo[89130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:32 compute-0 python3.9[89132]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:32 compute-0 sudo[89130]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:32 compute-0 sudo[89208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmmlphuzpbcdlfempvvsfxpulbcvavfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798791.869702-116-135957785468995/AnsiballZ_file.py'
Nov 22 08:06:32 compute-0 sudo[89208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:32 compute-0 python3.9[89210]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.1428ds1r recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:32 compute-0 sudo[89208]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:33 compute-0 sudo[89360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmsnqdlqxwzblmwwlpqnsfoqkikrxsux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798792.9165318-128-163580116399938/AnsiballZ_stat.py'
Nov 22 08:06:33 compute-0 sudo[89360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:33 compute-0 python3.9[89362]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:33 compute-0 sudo[89360]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:33 compute-0 sudo[89438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeylokevvczkxxnwkjdczkyozzmxufnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798792.9165318-128-163580116399938/AnsiballZ_file.py'
Nov 22 08:06:33 compute-0 sudo[89438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:33 compute-0 python3.9[89440]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:33 compute-0 sudo[89438]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:34 compute-0 sudo[89590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlawduqwdezumrwjroegnxiuyywsawap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798793.9476786-141-38208878296859/AnsiballZ_command.py'
Nov 22 08:06:34 compute-0 sudo[89590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:34 compute-0 python3.9[89592]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:06:34 compute-0 sudo[89590]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:35 compute-0 sudo[89743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahtaduaagunxgedshybjsqnxigdbfhfx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763798794.8018887-149-146032483885367/AnsiballZ_edpm_nftables_from_files.py'
Nov 22 08:06:35 compute-0 sudo[89743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:35 compute-0 python3[89745]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 22 08:06:35 compute-0 sudo[89743]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:35 compute-0 sudo[89895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srbedvnnsgfqykufxpelhqrqofpqqbfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798795.6040854-157-148324011379072/AnsiballZ_stat.py'
Nov 22 08:06:35 compute-0 sudo[89895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:36 compute-0 python3.9[89897]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:36 compute-0 sudo[89895]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:36 compute-0 sudo[90020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asmihtryzljsirbbnkabaepkqkywurej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798795.6040854-157-148324011379072/AnsiballZ_copy.py'
Nov 22 08:06:36 compute-0 sudo[90020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:36 compute-0 python3.9[90022]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798795.6040854-157-148324011379072/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:36 compute-0 sudo[90020]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:37 compute-0 sudo[90172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtbpzcsdissdtjsbrvbizzefiaxhzjnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798797.0013485-172-157480908880941/AnsiballZ_stat.py'
Nov 22 08:06:37 compute-0 sudo[90172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:37 compute-0 python3.9[90174]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:37 compute-0 sudo[90172]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:37 compute-0 sudo[90297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhywsgdjvrdvpfuvutkegfurfmehovan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798797.0013485-172-157480908880941/AnsiballZ_copy.py'
Nov 22 08:06:37 compute-0 sudo[90297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:38 compute-0 python3.9[90299]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798797.0013485-172-157480908880941/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:38 compute-0 sudo[90297]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:38 compute-0 sudo[90449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziubguuckqomdknjgphkfzdgpbgouiwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798798.1623485-187-57696742156405/AnsiballZ_stat.py'
Nov 22 08:06:38 compute-0 sudo[90449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:38 compute-0 python3.9[90451]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:38 compute-0 sudo[90449]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:38 compute-0 sudo[90574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etgmsblrjhxyvnojocopsvajhbzulats ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798798.1623485-187-57696742156405/AnsiballZ_copy.py'
Nov 22 08:06:38 compute-0 sudo[90574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:39 compute-0 python3.9[90576]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798798.1623485-187-57696742156405/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:39 compute-0 sudo[90574]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:39 compute-0 sudo[90726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjbqbdhhghcmgioxexalesqsuctlfeug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798799.3463926-202-13057465164068/AnsiballZ_stat.py'
Nov 22 08:06:39 compute-0 sudo[90726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:39 compute-0 python3.9[90728]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:39 compute-0 sudo[90726]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:40 compute-0 sudo[90851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eckjjrywcfsfyhnlwqaidbkonzzhlxwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798799.3463926-202-13057465164068/AnsiballZ_copy.py'
Nov 22 08:06:40 compute-0 sudo[90851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:40 compute-0 python3.9[90853]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798799.3463926-202-13057465164068/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:40 compute-0 sudo[90851]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:40 compute-0 sudo[91003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzcqvxdhakjjhnqlvogottxastwgscts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798800.580994-217-209456131769122/AnsiballZ_stat.py'
Nov 22 08:06:40 compute-0 sudo[91003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:41 compute-0 python3.9[91005]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:41 compute-0 sudo[91003]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:41 compute-0 sudo[91128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cabprkumdsljqidlqtynfvkoaiypjtxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798800.580994-217-209456131769122/AnsiballZ_copy.py'
Nov 22 08:06:41 compute-0 sudo[91128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:41 compute-0 python3.9[91130]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763798800.580994-217-209456131769122/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:41 compute-0 sudo[91128]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:42 compute-0 sudo[91280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzbzvwgcynrykcwzxprwufbagabycgpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798801.7915783-232-124863332443121/AnsiballZ_file.py'
Nov 22 08:06:42 compute-0 sudo[91280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:42 compute-0 python3.9[91282]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:42 compute-0 sudo[91280]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:42 compute-0 sudo[91432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opooiixsorvvbigpbgrgpwawxsidnboo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798802.3878148-240-198131719936982/AnsiballZ_command.py'
Nov 22 08:06:42 compute-0 sudo[91432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:42 compute-0 python3.9[91434]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:06:42 compute-0 sudo[91432]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:43 compute-0 sudo[91587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkjgtwgvqdvhqaqnqionjsbvvrywtpey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798802.9811254-248-15783646218321/AnsiballZ_blockinfile.py'
Nov 22 08:06:43 compute-0 sudo[91587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:43 compute-0 python3.9[91589]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:43 compute-0 sudo[91587]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:44 compute-0 sudo[91739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dugmhlkcfxocqcabdykhzzvwpkiienss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798803.8314595-257-92658270681835/AnsiballZ_command.py'
Nov 22 08:06:44 compute-0 sudo[91739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:44 compute-0 python3.9[91741]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:06:44 compute-0 sudo[91739]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:44 compute-0 sudo[91892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjvdqtgicwmvwxqhiwshcyxyyzmtstqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798804.4891615-265-105012005585825/AnsiballZ_stat.py'
Nov 22 08:06:44 compute-0 sudo[91892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:45 compute-0 python3.9[91894]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:06:45 compute-0 sudo[91892]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:45 compute-0 sudo[92046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmflwdbtfyxmwehrrcfmsqrfdhvmlfuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798805.2224915-273-23739991334556/AnsiballZ_command.py'
Nov 22 08:06:45 compute-0 sudo[92046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:45 compute-0 python3.9[92048]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:06:45 compute-0 sudo[92046]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:46 compute-0 sudo[92201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-helsnrspvkfgzyofrprrnjuvvnqvwfdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798805.8479178-281-72806819169829/AnsiballZ_file.py'
Nov 22 08:06:46 compute-0 sudo[92201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:46 compute-0 python3.9[92203]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:46 compute-0 sudo[92201]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:47 compute-0 python3.9[92353]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:06:48 compute-0 sudo[92504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdmyqklmwkbiycvuvrdqelhnijhslsqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798808.178884-321-46703921141867/AnsiballZ_command.py'
Nov 22 08:06:48 compute-0 sudo[92504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:48 compute-0 python3.9[92506]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:06:48 compute-0 ovs-vsctl[92507]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 22 08:06:48 compute-0 sudo[92504]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:49 compute-0 sudo[92657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaniwkomiengpjxpyrohvasmjaqdjqfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798808.8485963-330-38907002432153/AnsiballZ_command.py'
Nov 22 08:06:49 compute-0 sudo[92657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:49 compute-0 python3.9[92659]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:06:49 compute-0 sudo[92657]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:49 compute-0 sudo[92812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pafvkgudfmwbopxrsbrarskcxidfrwix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798809.497127-338-200513515211376/AnsiballZ_command.py'
Nov 22 08:06:49 compute-0 sudo[92812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:49 compute-0 python3.9[92814]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:06:49 compute-0 ovs-vsctl[92815]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 22 08:06:50 compute-0 sudo[92812]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:50 compute-0 python3.9[92965]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:06:51 compute-0 sudo[93117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftolbnszqeivhohlldwcmspifhdkkxfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798810.8176572-355-203115884411454/AnsiballZ_file.py'
Nov 22 08:06:51 compute-0 sudo[93117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:51 compute-0 python3.9[93119]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:06:51 compute-0 sudo[93117]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:51 compute-0 sudo[93269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aotumnnsdcjpctdqxamarhqtuuqbeztf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798811.4005022-363-128823072093656/AnsiballZ_stat.py'
Nov 22 08:06:51 compute-0 sudo[93269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:51 compute-0 python3.9[93271]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:51 compute-0 sudo[93269]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:52 compute-0 sudo[93347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okycjexfujsgrddkiaqjdrwlzrcwwclb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798811.4005022-363-128823072093656/AnsiballZ_file.py'
Nov 22 08:06:52 compute-0 sudo[93347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:52 compute-0 python3.9[93349]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:06:52 compute-0 sudo[93347]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:52 compute-0 sudo[93499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcwjietrvmafyqwpbmcytbvxwrgyehpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798812.4048574-363-204832758095525/AnsiballZ_stat.py'
Nov 22 08:06:52 compute-0 sudo[93499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:52 compute-0 python3.9[93501]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:52 compute-0 sudo[93499]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:53 compute-0 sudo[93577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjsuxjhoiykkvhoautvhmpfepdikzbfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798812.4048574-363-204832758095525/AnsiballZ_file.py'
Nov 22 08:06:53 compute-0 sudo[93577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:53 compute-0 python3.9[93579]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:06:53 compute-0 sudo[93577]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:53 compute-0 sudo[93729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coohifvwtavujrfyorbsmchbheudjmfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798813.4587796-386-9900575746338/AnsiballZ_file.py'
Nov 22 08:06:53 compute-0 sudo[93729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:53 compute-0 python3.9[93731]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:54 compute-0 sudo[93729]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:54 compute-0 sudo[93881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlqogfqcgwpsnxulzdgwlidwdaavcwkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798814.1456218-394-170601937862899/AnsiballZ_stat.py'
Nov 22 08:06:54 compute-0 sudo[93881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:54 compute-0 python3.9[93883]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:54 compute-0 sudo[93881]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:54 compute-0 sudo[93959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eefltwnbvfdnxuedmqudhxoplijxradl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798814.1456218-394-170601937862899/AnsiballZ_file.py'
Nov 22 08:06:54 compute-0 sudo[93959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:54 compute-0 python3.9[93961]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:55 compute-0 sudo[93959]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:55 compute-0 sudo[94111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhkmqpguhslpzqelcueqrxiwkjohrqyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798815.13042-406-88780706128563/AnsiballZ_stat.py'
Nov 22 08:06:55 compute-0 sudo[94111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:55 compute-0 python3.9[94113]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:55 compute-0 sudo[94111]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:55 compute-0 sudo[94189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptysvwgysklejddvvmstctwfhbdhrnwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798815.13042-406-88780706128563/AnsiballZ_file.py'
Nov 22 08:06:55 compute-0 sudo[94189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:55 compute-0 python3.9[94191]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:55 compute-0 sudo[94189]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:56 compute-0 sudo[94341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfvllldcsonbilxlojerbbemkbwqccqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798816.1463487-418-52814693554361/AnsiballZ_systemd.py'
Nov 22 08:06:56 compute-0 sudo[94341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:56 compute-0 python3.9[94343]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:06:56 compute-0 systemd[1]: Reloading.
Nov 22 08:06:56 compute-0 systemd-sysv-generator[94374]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:06:56 compute-0 systemd-rc-local-generator[94367]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:06:57 compute-0 sudo[94341]: pam_unix(sudo:session): session closed for user root
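Note: the ansible-ansible.builtin.systemd invocation above (daemon_reload=True enabled=True state=started) maps onto an ordinary systemctl sequence. A minimal sketch, assuming only the unit name shown in the log:

    # Reload unit files so the freshly copied unit and preset are seen,
    # then enable and start the service in one pass.
    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service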
Nov 22 08:06:57 compute-0 sudo[94530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuebsmcuisokapuljcxzocboqgdjinaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798817.1779072-426-74236018156450/AnsiballZ_stat.py'
Nov 22 08:06:57 compute-0 sudo[94530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:57 compute-0 python3.9[94532]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:57 compute-0 sudo[94530]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:57 compute-0 sudo[94608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujpawdsbqdnpobmvmbhykvbayhsprqye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798817.1779072-426-74236018156450/AnsiballZ_file.py'
Nov 22 08:06:57 compute-0 sudo[94608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:58 compute-0 python3.9[94610]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:58 compute-0 sudo[94608]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:58 compute-0 sudo[94760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nihvlfrjlkqhvpgwagqsdzxtyjscgfba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798818.1957834-438-164665743777992/AnsiballZ_stat.py'
Nov 22 08:06:58 compute-0 sudo[94760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:58 compute-0 python3.9[94762]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:06:58 compute-0 sudo[94760]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:58 compute-0 sudo[94838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwkflsfbexqpxvvxbsuhqfauudlcfkmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798818.1957834-438-164665743777992/AnsiballZ_file.py'
Nov 22 08:06:58 compute-0 sudo[94838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:59 compute-0 python3.9[94840]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:06:59 compute-0 sudo[94838]: pam_unix(sudo:session): session closed for user root
Nov 22 08:06:59 compute-0 sudo[94990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owknsflcswkwjixmytkqdqyynxsmqcta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798819.1799505-450-9325680663807/AnsiballZ_systemd.py'
Nov 22 08:06:59 compute-0 sudo[94990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:06:59 compute-0 python3.9[94992]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:06:59 compute-0 systemd[1]: Reloading.
Nov 22 08:06:59 compute-0 systemd-rc-local-generator[95022]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:06:59 compute-0 systemd-sysv-generator[95025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:06:59 compute-0 systemd[1]: Starting Create netns directory...
Nov 22 08:07:00 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 08:07:00 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 08:07:00 compute-0 systemd[1]: Finished Create netns directory.
Nov 22 08:07:00 compute-0 sudo[94990]: pam_unix(sudo:session): session closed for user root
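Note: the same preset-plus-unit pattern is repeated for netns-placeholder, and the oneshot runs to completion immediately ("Finished Create netns directory"). A systemd preset file usually holds a single policy line; the file content below is an assumption for illustration, not the logged file:

    # Hypothetical content of /etc/systemd/system-preset/91-netns-placeholder.preset
    cat <<'EOF' > /etc/systemd/system-preset/91-netns-placeholder.preset
    enable netns-placeholder.service
    EOF
    # Apply the preset policy to the unit instead of enabling it by hand.
    systemctl preset netns-placeholder.service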
Nov 22 08:07:00 compute-0 sudo[95185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqephafaxaripttfagestyqqeixaoigx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798820.244978-460-277359088415898/AnsiballZ_file.py'
Nov 22 08:07:00 compute-0 sudo[95185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:00 compute-0 python3.9[95187]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:07:00 compute-0 sudo[95185]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:01 compute-0 sudo[95337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnoscqconrbnyxdxmawtafeepgsuuiln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798820.8748398-468-252252570548875/AnsiballZ_stat.py'
Nov 22 08:07:01 compute-0 sudo[95337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:01 compute-0 python3.9[95339]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:07:01 compute-0 sudo[95337]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:01 compute-0 sudo[95460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idxkwgmhrixnfjxanmjywkbvqxfderjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798820.8748398-468-252252570548875/AnsiballZ_copy.py'
Nov 22 08:07:01 compute-0 sudo[95460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:01 compute-0 python3.9[95462]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763798820.8748398-468-252252570548875/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:07:01 compute-0 sudo[95460]: pam_unix(sudo:session): session closed for user root
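Note: the healthcheck script copied above ends up mounted read-only at /openstack inside the container (see the config_data printed later in this log) and is what podman's healthcheck executes. A minimal manual check, using the container name from the log:

    # Trigger the container healthcheck once; exit status reflects health.
    podman healthcheck run ovn_controller && echo healthy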
Nov 22 08:07:02 compute-0 sudo[95612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wibohjfmrxgirvqideeldekrpzmojxpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798822.2235444-485-47327568023869/AnsiballZ_file.py'
Nov 22 08:07:02 compute-0 sudo[95612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:02 compute-0 python3.9[95614]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:07:02 compute-0 sudo[95612]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:03 compute-0 sudo[95764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcyzlexgcvdaybpzweiebfxrujoexdvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798822.9052505-493-21301595469607/AnsiballZ_stat.py'
Nov 22 08:07:03 compute-0 sudo[95764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:03 compute-0 python3.9[95766]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:07:03 compute-0 sudo[95764]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:03 compute-0 sudo[95887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aekwthyberghxftycctdtjqtnxlbhjfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798822.9052505-493-21301595469607/AnsiballZ_copy.py'
Nov 22 08:07:03 compute-0 sudo[95887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:03 compute-0 python3.9[95889]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763798822.9052505-493-21301595469607/.source.json _original_basename=.79scohmi follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:07:04 compute-0 sudo[95887]: pam_unix(sudo:session): session closed for user root
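Note: ovn_controller.json is a kolla config file whose content is not logged (content=NOT_LOGGING_PARAMETER). The sketch below only illustrates the kolla format; the "command" value is taken from the /run_command trace later in this log, while the "permissions" block is a hypothetical example:

    # Hypothetical kolla config for illustration, not the deployed file.
    cat <<'EOF' > /var/lib/kolla/config_files/ovn_controller.json
    {
        "command": "/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt",
        "permissions": [
            {"path": "/run/ovn", "owner": "root:root", "recurse": true}
        ]
    }
    EOF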
Nov 22 08:07:04 compute-0 sudo[96039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlxwferfvxevqxqevlxbgfgzcrqavbcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798824.2561626-508-29701640178353/AnsiballZ_file.py'
Nov 22 08:07:04 compute-0 sudo[96039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:04 compute-0 python3.9[96041]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:07:04 compute-0 sudo[96039]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:05 compute-0 sudo[96191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrlazthfjbezflhjfsodcchzxavvrazq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798825.049691-516-118996101768611/AnsiballZ_stat.py'
Nov 22 08:07:05 compute-0 sudo[96191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:05 compute-0 sudo[96191]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:05 compute-0 sudo[96314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmfclbhfvuebsukhblqeccaczsqhejng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798825.049691-516-118996101768611/AnsiballZ_copy.py'
Nov 22 08:07:05 compute-0 sudo[96314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:06 compute-0 sudo[96314]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:06 compute-0 sudo[96466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmvufwumxaerpyewrkgtqetnjwpbpdgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798826.4094267-533-78924025264337/AnsiballZ_container_config_data.py'
Nov 22 08:07:06 compute-0 sudo[96466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:07 compute-0 python3.9[96468]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 22 08:07:07 compute-0 sudo[96466]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:07 compute-0 sudo[96618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezjflmsnbvhwmntylqyuwjrwgqkntbmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798827.4635181-542-85909400075372/AnsiballZ_container_config_hash.py'
Nov 22 08:07:07 compute-0 sudo[96618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:08 compute-0 python3.9[96620]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 08:07:08 compute-0 sudo[96618]: pam_unix(sudo:session): session closed for user root
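Note: container_config_data reads every *.json under the startup-config directory and container_config_hash derives a hash so that config drift can trigger a container restart. The modules' exact algorithm is not visible in the log; a rough shell analogue of the idea only:

    # Illustrative only: hash the concatenated JSON configs in sorted order.
    find /var/lib/edpm-config/container-startup-config/ovn_controller \
         -name '*.json' -print0 | sort -z | xargs -0 cat | sha1sum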
Nov 22 08:07:08 compute-0 sudo[96770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjgepnuccojpykgqdyixkovveeukhsea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798828.4227173-551-101339027753152/AnsiballZ_podman_container_info.py'
Nov 22 08:07:08 compute-0 sudo[96770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:09 compute-0 python3.9[96772]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 22 08:07:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:07:09 compute-0 sudo[96770]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:09 compute-0 sudo[96932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abjpmpjtqsaquexjikkhogjeucoqkwdt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763798829.4968355-564-249757181599288/AnsiballZ_edpm_container_manage.py'
Nov 22 08:07:09 compute-0 sudo[96932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:10 compute-0 python3[96934]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 08:07:10 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:07:10 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:07:10 compute-0 podman[96971]: 2025-11-22 08:07:10.401122204 +0000 UTC m=+0.055004274 container create 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 22 08:07:10 compute-0 podman[96971]: 2025-11-22 08:07:10.373924146 +0000 UTC m=+0.027806256 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 22 08:07:10 compute-0 python3[96934]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 22 08:07:10 compute-0 sudo[96932]: pam_unix(sudo:session): session closed for user root
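Note: the PODMAN-CONTAINER-DEBUG line above is easier to read re-wrapped. This is the same command verbatim, only split with line continuations; the long --label config_data={...} argument is elided here for brevity:

    podman create --name ovn_controller \
      --conmon-pidfile /run/ovn_controller.pid \
      --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
      --healthcheck-command /openstack/healthcheck \
      --label config_id=ovn_controller \
      --label container_name=ovn_controller \
      --label managed_by=edpm_ansible \
      --log-driver journald --log-level info \
      --network host --privileged=True --user root \
      --volume /lib/modules:/lib/modules:ro \
      --volume /run:/run \
      --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z \
      --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro \
      --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z \
      --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z \
      --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z \
      --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z \
      --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z \
      quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified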
Nov 22 08:07:10 compute-0 sudo[97158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbreiudvcrobdfxfkcltuxsyxjquqgwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798830.675306-572-151536094309419/AnsiballZ_stat.py'
Nov 22 08:07:10 compute-0 sudo[97158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:11 compute-0 python3.9[97160]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:07:11 compute-0 sudo[97158]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 08:07:11 compute-0 sudo[97312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxlecasgmownhocowoxwbprgmxbutbbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798831.3154955-581-199259178680276/AnsiballZ_file.py'
Nov 22 08:07:11 compute-0 sudo[97312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:11 compute-0 python3.9[97314]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:07:11 compute-0 sudo[97312]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:11 compute-0 sudo[97388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypvncalshfxqdcvotrncyivotnsxoudw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798831.3154955-581-199259178680276/AnsiballZ_stat.py'
Nov 22 08:07:11 compute-0 sudo[97388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:12 compute-0 python3.9[97390]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:07:12 compute-0 sudo[97388]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:12 compute-0 sudo[97539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzvglfkmsmzvcjbvbdksvyjcqokutoss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798832.2136986-581-43359321657475/AnsiballZ_copy.py'
Nov 22 08:07:12 compute-0 sudo[97539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:12 compute-0 python3.9[97541]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763798832.2136986-581-43359321657475/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:07:12 compute-0 sudo[97539]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:13 compute-0 sudo[97615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrxstdbrfzbpnxkmcvcpmnwkvjexxnkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798832.2136986-581-43359321657475/AnsiballZ_systemd.py'
Nov 22 08:07:13 compute-0 sudo[97615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:13 compute-0 python3.9[97617]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:07:13 compute-0 systemd[1]: Reloading.
Nov 22 08:07:13 compute-0 systemd-rc-local-generator[97645]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:07:13 compute-0 systemd-sysv-generator[97650]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:07:13 compute-0 sudo[97615]: pam_unix(sudo:session): session closed for user root
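Note: the unit copied to /etc/systemd/system/edpm_ovn_controller.service is not logged (content=NOT_LOGGING_PARAMETER). The sketch below only illustrates the pattern implied by the surrounding lines (the unit's description is "ovn_controller container" and it drives the edpm-start-podman-container helper installed earlier); treat every directive as an assumption:

    # Hypothetical unit sketch, not the deployed file.
    cat <<'EOF' > /etc/systemd/system/edpm_ovn_controller.service
    [Unit]
    Description=ovn_controller container

    [Service]
    Restart=always
    ExecStart=/var/local/libexec/edpm-start-podman-container ovn_controller

    [Install]
    WantedBy=multi-user.target
    EOF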
Nov 22 08:07:13 compute-0 sudo[97725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkgjkfdncpidrxikidzmorpdofzsbguw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798832.2136986-581-43359321657475/AnsiballZ_systemd.py'
Nov 22 08:07:13 compute-0 sudo[97725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:14 compute-0 python3.9[97727]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:07:14 compute-0 systemd[1]: Reloading.
Nov 22 08:07:14 compute-0 systemd-rc-local-generator[97752]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:07:14 compute-0 systemd-sysv-generator[97755]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:07:14 compute-0 systemd[1]: Starting ovn_controller container...
Nov 22 08:07:14 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 22 08:07:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:07:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/affc5633028ee010d2b3d05c30ef84ce0dd60811014d85cc4802671846df44eb/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 22 08:07:14 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d.
Nov 22 08:07:14 compute-0 podman[97768]: 2025-11-22 08:07:14.581373665 +0000 UTC m=+0.117321276 container init 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 08:07:14 compute-0 ovn_controller[97783]: + sudo -E kolla_set_configs
Nov 22 08:07:14 compute-0 podman[97768]: 2025-11-22 08:07:14.603906746 +0000 UTC m=+0.139854347 container start 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:07:14 compute-0 edpm-start-podman-container[97768]: ovn_controller
Nov 22 08:07:14 compute-0 systemd[1]: Created slice User Slice of UID 0.
Nov 22 08:07:14 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 22 08:07:14 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 22 08:07:14 compute-0 systemd[1]: Starting User Manager for UID 0...
Nov 22 08:07:14 compute-0 edpm-start-podman-container[97767]: Creating additional drop-in dependency for "ovn_controller" (3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d)
Nov 22 08:07:14 compute-0 systemd[97822]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 22 08:07:14 compute-0 podman[97789]: 2025-11-22 08:07:14.688670036 +0000 UTC m=+0.070660558 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:07:14 compute-0 systemd[1]: 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d-5370e6d919eb9912.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 08:07:14 compute-0 systemd[1]: 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d-5370e6d919eb9912.service: Failed with result 'exit-code'.
Nov 22 08:07:14 compute-0 systemd[1]: Reloading.
Nov 22 08:07:14 compute-0 systemd-rc-local-generator[97869]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:07:14 compute-0 systemd-sysv-generator[97873]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:07:14 compute-0 systemd[97822]: Queued start job for default target Main User Target.
Nov 22 08:07:14 compute-0 systemd[97822]: Created slice User Application Slice.
Nov 22 08:07:14 compute-0 systemd[97822]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 22 08:07:14 compute-0 systemd[97822]: Started Daily Cleanup of User's Temporary Directories.
Nov 22 08:07:14 compute-0 systemd[97822]: Reached target Paths.
Nov 22 08:07:14 compute-0 systemd[97822]: Reached target Timers.
Nov 22 08:07:14 compute-0 systemd[97822]: Starting D-Bus User Message Bus Socket...
Nov 22 08:07:14 compute-0 systemd[97822]: Starting Create User's Volatile Files and Directories...
Nov 22 08:07:14 compute-0 systemd[97822]: Listening on D-Bus User Message Bus Socket.
Nov 22 08:07:14 compute-0 systemd[97822]: Reached target Sockets.
Nov 22 08:07:14 compute-0 systemd[97822]: Finished Create User's Volatile Files and Directories.
Nov 22 08:07:14 compute-0 systemd[97822]: Reached target Basic System.
Nov 22 08:07:14 compute-0 systemd[97822]: Reached target Main User Target.
Nov 22 08:07:14 compute-0 systemd[97822]: Startup finished in 141ms.
Nov 22 08:07:14 compute-0 systemd[1]: Started User Manager for UID 0.
Nov 22 08:07:14 compute-0 systemd[1]: Started ovn_controller container.
Nov 22 08:07:14 compute-0 systemd[1]: Started Session c1 of User root.
Nov 22 08:07:14 compute-0 sudo[97725]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:14 compute-0 ovn_controller[97783]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 08:07:14 compute-0 ovn_controller[97783]: INFO:__main__:Validating config file
Nov 22 08:07:14 compute-0 ovn_controller[97783]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 08:07:14 compute-0 ovn_controller[97783]: INFO:__main__:Writing out command to execute
Nov 22 08:07:14 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 22 08:07:14 compute-0 ovn_controller[97783]: ++ cat /run_command
Nov 22 08:07:14 compute-0 ovn_controller[97783]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 22 08:07:14 compute-0 ovn_controller[97783]: + ARGS=
Nov 22 08:07:14 compute-0 ovn_controller[97783]: + sudo kolla_copy_cacerts
Nov 22 08:07:15 compute-0 systemd[1]: Started Session c2 of User root.
Nov 22 08:07:15 compute-0 ovn_controller[97783]: + [[ ! -n '' ]]
Nov 22 08:07:15 compute-0 ovn_controller[97783]: + . kolla_extend_start
Nov 22 08:07:15 compute-0 ovn_controller[97783]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 22 08:07:15 compute-0 ovn_controller[97783]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 22 08:07:15 compute-0 ovn_controller[97783]: + umask 0022
Nov 22 08:07:15 compute-0 ovn_controller[97783]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
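Note: the '+'-prefixed lines above are the shell trace of the kolla entrypoint running inside the container. Stripped of tracing, the sequence it performed reduces to roughly this:

    # Condensed from the trace: render config, read the command, exec it.
    sudo -E kolla_set_configs     # copies files per /var/lib/kolla/config_files/config.json
    CMD="$(cat /run_command)"     # command written out by kolla_set_configs
    sudo kolla_copy_cacerts       # installs the CA bundle into the trust store
    umask 0022
    exec $CMD                     # PID 1 becomes ovn-controller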
Nov 22 08:07:15 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 22 08:07:15 compute-0 NetworkManager[56326]: <info>  [1763798835.0801] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 22 08:07:15 compute-0 NetworkManager[56326]: <info>  [1763798835.0808] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 08:07:15 compute-0 NetworkManager[56326]: <info>  [1763798835.0823] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Nov 22 08:07:15 compute-0 NetworkManager[56326]: <info>  [1763798835.0827] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Nov 22 08:07:15 compute-0 NetworkManager[56326]: <info>  [1763798835.0830] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 22 08:07:15 compute-0 kernel: br-int: entered promiscuous mode
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00019|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 22 08:07:15 compute-0 ovn_controller[97783]: 2025-11-22T08:07:15Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 22 08:07:15 compute-0 NetworkManager[56326]: <info>  [1763798835.1013] manager: (ovn-d3c6e6-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 22 08:07:15 compute-0 systemd-udevd[97943]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:07:15 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Nov 22 08:07:15 compute-0 systemd-udevd[97947]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:07:15 compute-0 NetworkManager[56326]: <info>  [1763798835.1174] device (genev_sys_6081): carrier: link connected
Nov 22 08:07:15 compute-0 NetworkManager[56326]: <info>  [1763798835.1177] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
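Note: ovn-controller learned its southbound endpoint (ssl:ovsdbserver-sb.openstack.svc:6642 above) and its encapsulation settings (the genev_sys_6081 device implies geneve) from the local Open_vSwitch table. These are the standard external_ids keys to inspect; the key names are standard OVN, the example values are placeholders:

    # Inspect the external_ids ovn-controller reads from the local OVS DB.
    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote      # e.g. "ssl:ovsdbserver-sb.openstack.svc:6642"
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-type  # e.g. "geneve"
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip    # local tunnel endpoint IP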
Nov 22 08:07:15 compute-0 sudo[98048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kavlidkxkppeimvbtypcizimqvvxxfcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798835.096411-609-164007746099605/AnsiballZ_command.py'
Nov 22 08:07:15 compute-0 sudo[98048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:15 compute-0 python3.9[98050]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:07:15 compute-0 ovs-vsctl[98051]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 22 08:07:15 compute-0 sudo[98048]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:15 compute-0 sudo[98201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eytkcsoqikktvrvdgsapkmrvevkwbadf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798835.6659875-617-210515159067392/AnsiballZ_command.py'
Nov 22 08:07:15 compute-0 sudo[98201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:16 compute-0 python3.9[98203]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:07:16 compute-0 ovs-vsctl[98205]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 22 08:07:16 compute-0 sudo[98201]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:16 compute-0 sudo[98356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zijnbjzndpxtavoqjbhsxoucqhsjfxts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798836.4986596-631-39985576937170/AnsiballZ_command.py'
Nov 22 08:07:16 compute-0 sudo[98356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:16 compute-0 python3.9[98358]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:07:16 compute-0 ovs-vsctl[98359]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 22 08:07:16 compute-0 sudo[98356]: pam_unix(sudo:session): session closed for user root
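Note: the ovs-vsctl ERR at 08:07:16 is harmless here: the get fails only because ovn-cms-options was never set, and the playbook removes the key anyway. The lookup and the cleanup can both be made quiet when the key is absent with --if-exists:

    # Same cleanup, but silent when the key is already absent.
    ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options | tr -d '"'
    ovs-vsctl --if-exists remove Open_vSwitch . external_ids ovn-cms-options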
Nov 22 08:07:17 compute-0 sshd-session[87289]: Connection closed by 192.168.122.30 port 56258
Nov 22 08:07:17 compute-0 sshd-session[87286]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:07:17 compute-0 systemd-logind[826]: Session 20 logged out. Waiting for processes to exit.
Nov 22 08:07:17 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Nov 22 08:07:17 compute-0 systemd[1]: session-20.scope: Consumed 42.199s CPU time.
Nov 22 08:07:17 compute-0 systemd-logind[826]: Removed session 20.
Nov 22 08:07:22 compute-0 sshd-session[98384]: Accepted publickey for zuul from 192.168.122.30 port 49692 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 08:07:22 compute-0 systemd-logind[826]: New session 22 of user zuul.
Nov 22 08:07:22 compute-0 systemd[1]: Started Session 22 of User zuul.
Nov 22 08:07:22 compute-0 sshd-session[98384]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 08:07:24 compute-0 python3.9[98537]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:07:24 compute-0 sudo[98691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhjhtseoywneyrvuegrvbyiemfbvkmfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798844.529291-34-64548017568920/AnsiballZ_file.py'
Nov 22 08:07:24 compute-0 sudo[98691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:25 compute-0 python3.9[98693]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:07:25 compute-0 sudo[98691]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:25 compute-0 systemd[1]: Stopping User Manager for UID 0...
Nov 22 08:07:25 compute-0 systemd[97822]: Activating special unit Exit the Session...
Nov 22 08:07:25 compute-0 systemd[97822]: Stopped target Main User Target.
Nov 22 08:07:25 compute-0 systemd[97822]: Stopped target Basic System.
Nov 22 08:07:25 compute-0 systemd[97822]: Stopped target Paths.
Nov 22 08:07:25 compute-0 systemd[97822]: Stopped target Sockets.
Nov 22 08:07:25 compute-0 systemd[97822]: Stopped target Timers.
Nov 22 08:07:25 compute-0 systemd[97822]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 22 08:07:25 compute-0 systemd[97822]: Closed D-Bus User Message Bus Socket.
Nov 22 08:07:25 compute-0 systemd[97822]: Stopped Create User's Volatile Files and Directories.
Nov 22 08:07:25 compute-0 systemd[97822]: Removed slice User Application Slice.
Nov 22 08:07:25 compute-0 systemd[97822]: Reached target Shutdown.
Nov 22 08:07:25 compute-0 systemd[97822]: Finished Exit the Session.
Nov 22 08:07:25 compute-0 systemd[97822]: Reached target Exit the Session.
Nov 22 08:07:25 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Nov 22 08:07:25 compute-0 systemd[1]: Stopped User Manager for UID 0.
Nov 22 08:07:25 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 22 08:07:25 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 22 08:07:25 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 22 08:07:25 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 22 08:07:25 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Nov 22 08:07:25 compute-0 sudo[98845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkbnuonodshcuyskhjaotljazzipcdvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798845.2691703-34-235699578893074/AnsiballZ_file.py'
Nov 22 08:07:25 compute-0 sudo[98845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:25 compute-0 python3.9[98847]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:07:25 compute-0 sudo[98845]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:26 compute-0 sudo[98997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkyotrwbufobacwahhhrlmwxbhxjxmck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798845.8410292-34-233330720374501/AnsiballZ_file.py'
Nov 22 08:07:26 compute-0 sudo[98997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:26 compute-0 python3.9[98999]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:07:26 compute-0 sudo[98997]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:26 compute-0 sudo[99149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-livthdupsyqjiuzqwwnxueanttpdykkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798846.4336925-34-229372913560727/AnsiballZ_file.py'
Nov 22 08:07:26 compute-0 sudo[99149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:26 compute-0 python3.9[99151]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:07:26 compute-0 sudo[99149]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:27 compute-0 sudo[99301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adodkmyirehtrlxzpqjntbgejvvahpfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798847.0271337-34-65980154878075/AnsiballZ_file.py'
Nov 22 08:07:27 compute-0 sudo[99301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:27 compute-0 python3.9[99303]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:07:27 compute-0 sudo[99301]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:28 compute-0 python3.9[99453]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:07:28 compute-0 sudo[99603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nctndypovfaizbkwquxivfmkvsrejnzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798848.4603019-78-203824826361816/AnsiballZ_seboolean.py'
Nov 22 08:07:28 compute-0 sudo[99603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:29 compute-0 python3.9[99605]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 22 08:07:29 compute-0 sudo[99603]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:30 compute-0 python3.9[99755]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:07:31 compute-0 python3.9[99876]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763798850.0111632-86-72042887968694/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:07:31 compute-0 python3.9[100026]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:07:32 compute-0 python3.9[100147]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763798851.3730335-101-134629724737689/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:07:33 compute-0 sudo[100298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfnjhyiigjcdjwzyunihhhsvoobjhxbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798852.7372174-118-64925502379081/AnsiballZ_setup.py'
Nov 22 08:07:33 compute-0 sudo[100298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:33 compute-0 python3.9[100300]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 08:07:33 compute-0 sudo[100298]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:34 compute-0 sudo[100382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kassaewndjlbpbuahvafzrgkerpoeqbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798852.7372174-118-64925502379081/AnsiballZ_dnf.py'
Nov 22 08:07:34 compute-0 sudo[100382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:34 compute-0 python3.9[100384]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 08:07:35 compute-0 sudo[100382]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:36 compute-0 sudo[100535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qivemaylupzcpeoqobbkvrstsxqrtqey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798855.8221595-130-90866722579305/AnsiballZ_systemd.py'
Nov 22 08:07:36 compute-0 sudo[100535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:36 compute-0 python3.9[100537]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 08:07:36 compute-0 sudo[100535]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:37 compute-0 python3.9[100690]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:07:38 compute-0 python3.9[100811]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763798856.9209704-138-26460888030544/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:07:38 compute-0 python3.9[100961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:07:39 compute-0 python3.9[101082]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763798858.1404774-138-35139425234611/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:07:40 compute-0 python3.9[101232]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:07:40 compute-0 python3.9[101353]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763798859.789841-182-124071832929720/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:07:41 compute-0 python3.9[101503]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:07:41 compute-0 python3.9[101624]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763798860.8413815-182-40248993470101/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:07:42 compute-0 python3.9[101774]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:07:43 compute-0 sudo[101926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gffuopxlkqhcnniujswzdukhkrxuokxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798862.7673552-220-87801500118715/AnsiballZ_file.py'
Nov 22 08:07:43 compute-0 sudo[101926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:43 compute-0 python3.9[101928]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:07:43 compute-0 sudo[101926]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:43 compute-0 sudo[102078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nomqbknyijipnjinuglthonsgfjodtrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798863.3823347-228-217274139444543/AnsiballZ_stat.py'
Nov 22 08:07:43 compute-0 sudo[102078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:43 compute-0 python3.9[102080]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:07:43 compute-0 sudo[102078]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:44 compute-0 sudo[102156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xprqsenkdjzprdzmonpykopeztndqrnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798863.3823347-228-217274139444543/AnsiballZ_file.py'
Nov 22 08:07:44 compute-0 sudo[102156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:44 compute-0 python3.9[102158]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:07:44 compute-0 sudo[102156]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:44 compute-0 sudo[102308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xelbppoatleefvxiizlbcpgrdlyowxna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798864.4602637-228-128973706606350/AnsiballZ_stat.py'
Nov 22 08:07:44 compute-0 sudo[102308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:44 compute-0 ovn_controller[97783]: 2025-11-22T08:07:44Z|00025|memory|INFO|16128 kB peak resident set size after 29.8 seconds
Nov 22 08:07:44 compute-0 ovn_controller[97783]: 2025-11-22T08:07:44Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 22 08:07:44 compute-0 podman[102310]: 2025-11-22 08:07:44.87801967 +0000 UTC m=+0.115982598 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 08:07:44 compute-0 python3.9[102311]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:07:44 compute-0 sudo[102308]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:45 compute-0 sudo[102412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azqpzltrmkqianfserftnyovzizlzqdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798864.4602637-228-128973706606350/AnsiballZ_file.py'
Nov 22 08:07:45 compute-0 sudo[102412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:45 compute-0 python3.9[102414]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:07:45 compute-0 sudo[102412]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:45 compute-0 sudo[102564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brprrfpguueqirfmdmswogogxuchllpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798865.6136017-251-105122991127642/AnsiballZ_file.py'
Nov 22 08:07:45 compute-0 sudo[102564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:46 compute-0 python3.9[102566]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:07:46 compute-0 sudo[102564]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:46 compute-0 sudo[102716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhyzilkqyltktffyckegvkugbstvswxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798866.2401762-259-69032304721250/AnsiballZ_stat.py'
Nov 22 08:07:46 compute-0 sudo[102716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:46 compute-0 python3.9[102718]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:07:46 compute-0 sudo[102716]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:46 compute-0 sudo[102794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crlinnlhaclqvzbnszqluudljhqtvzjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798866.2401762-259-69032304721250/AnsiballZ_file.py'
Nov 22 08:07:46 compute-0 sudo[102794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:47 compute-0 python3.9[102796]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:07:47 compute-0 sudo[102794]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:47 compute-0 sudo[102946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gylpkuvvzlbmkvbigirtdeagsivjlxdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798867.4102387-271-231033854456956/AnsiballZ_stat.py'
Nov 22 08:07:47 compute-0 sudo[102946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:47 compute-0 python3.9[102948]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:07:47 compute-0 sudo[102946]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:48 compute-0 sudo[103024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htkilokzmyazcnahjueysxceymrjtosc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798867.4102387-271-231033854456956/AnsiballZ_file.py'
Nov 22 08:07:48 compute-0 sudo[103024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:48 compute-0 python3.9[103026]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:07:48 compute-0 sudo[103024]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:48 compute-0 sudo[103176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reezhgedwsdvbpxgjreqzyerfijlgjlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798868.4361818-283-226390076019402/AnsiballZ_systemd.py'
Nov 22 08:07:48 compute-0 sudo[103176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:49 compute-0 python3.9[103178]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:07:49 compute-0 systemd[1]: Reloading.
Nov 22 08:07:49 compute-0 systemd-sysv-generator[103213]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:07:49 compute-0 systemd-rc-local-generator[103208]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:07:49 compute-0 sudo[103176]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:49 compute-0 sudo[103366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crhmthujughvehtedxfhyjlcodhtwtme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798869.4770486-291-125365921175960/AnsiballZ_stat.py'
Nov 22 08:07:49 compute-0 sudo[103366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:49 compute-0 python3.9[103368]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:07:49 compute-0 sudo[103366]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:50 compute-0 sudo[103444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfcmelbfmbigcaehebkhwtomhmwlsunk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798869.4770486-291-125365921175960/AnsiballZ_file.py'
Nov 22 08:07:50 compute-0 sudo[103444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:50 compute-0 python3.9[103446]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:07:50 compute-0 sudo[103444]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:50 compute-0 sudo[103596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmrlmejzwrbaucqyhwfuqbrmqorabkem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798870.509267-303-257825891490549/AnsiballZ_stat.py'
Nov 22 08:07:50 compute-0 sudo[103596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:51 compute-0 python3.9[103598]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:07:51 compute-0 sudo[103596]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:51 compute-0 sudo[103674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-advhqdqqyyarlavbiipkasculpdfehyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798870.509267-303-257825891490549/AnsiballZ_file.py'
Nov 22 08:07:51 compute-0 sudo[103674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:51 compute-0 python3.9[103676]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:07:51 compute-0 sudo[103674]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:51 compute-0 sudo[103826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aizuvnfdaixglhyauzivpcuathyspdki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798871.6885684-315-101253736157211/AnsiballZ_systemd.py'
Nov 22 08:07:51 compute-0 sudo[103826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:52 compute-0 python3.9[103828]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:07:52 compute-0 systemd[1]: Reloading.
Nov 22 08:07:52 compute-0 systemd-sysv-generator[103857]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:07:52 compute-0 systemd-rc-local-generator[103853]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:07:52 compute-0 systemd[1]: Starting Create netns directory...
Nov 22 08:07:52 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 08:07:52 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 08:07:52 compute-0 systemd[1]: Finished Create netns directory.
Nov 22 08:07:52 compute-0 sudo[103826]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:53 compute-0 sudo[104020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejkvsjdxqwwngjkxkzzyanztblpxpgqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798872.796636-325-112568441978929/AnsiballZ_file.py'
Nov 22 08:07:53 compute-0 sudo[104020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:53 compute-0 python3.9[104022]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:07:53 compute-0 sudo[104020]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:53 compute-0 sudo[104172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uihbnfdmsapqlbioylzfvfuggdmjicmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798873.4354289-333-175214146085317/AnsiballZ_stat.py'
Nov 22 08:07:53 compute-0 sudo[104172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:53 compute-0 python3.9[104174]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:07:53 compute-0 sudo[104172]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:54 compute-0 sudo[104295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keesohdyswspxblnsjfrzwpiesziyfas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798873.4354289-333-175214146085317/AnsiballZ_copy.py'
Nov 22 08:07:54 compute-0 sudo[104295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:54 compute-0 python3.9[104297]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763798873.4354289-333-175214146085317/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:07:54 compute-0 sudo[104295]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:54 compute-0 sudo[104447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tieictiputikmmmchvwixzgeieteomyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798874.7214806-350-241232591473371/AnsiballZ_file.py'
Nov 22 08:07:54 compute-0 sudo[104447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:55 compute-0 python3.9[104449]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:07:55 compute-0 sudo[104447]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:55 compute-0 sudo[104599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tludgxczfqjftmhsbazcjqjwygwefjnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798875.5272906-358-10941340102138/AnsiballZ_stat.py'
Nov 22 08:07:55 compute-0 sudo[104599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:56 compute-0 python3.9[104601]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:07:56 compute-0 sudo[104599]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:56 compute-0 sudo[104722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkvoujbwwscqvpknknkdetcmkkqbrvrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798875.5272906-358-10941340102138/AnsiballZ_copy.py'
Nov 22 08:07:56 compute-0 sudo[104722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:56 compute-0 python3.9[104724]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763798875.5272906-358-10941340102138/.source.json _original_basename=.ws3jzipw follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:07:56 compute-0 sudo[104722]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:57 compute-0 sudo[104874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfrmsmnknstwmvdvjakzqraeogttramj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798876.875937-373-19709152998941/AnsiballZ_file.py'
Nov 22 08:07:57 compute-0 sudo[104874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:57 compute-0 python3.9[104876]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:07:57 compute-0 sudo[104874]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:57 compute-0 sudo[105026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezbixonzrhveaehjswghkknsqiseracx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798877.570651-381-72371575203427/AnsiballZ_stat.py'
Nov 22 08:07:57 compute-0 sudo[105026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:58 compute-0 sudo[105026]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:58 compute-0 sudo[105149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhnzlnxudjdinsjtrmluowphretxwrof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798877.570651-381-72371575203427/AnsiballZ_copy.py'
Nov 22 08:07:58 compute-0 sudo[105149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:58 compute-0 sudo[105149]: pam_unix(sudo:session): session closed for user root
Nov 22 08:07:59 compute-0 sudo[105301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kykdgtlnuorphfkfyiijyiwkczyynisw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798878.9029033-398-146989946505479/AnsiballZ_container_config_data.py'
Nov 22 08:07:59 compute-0 sudo[105301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:07:59 compute-0 python3.9[105303]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 22 08:07:59 compute-0 sudo[105301]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:00 compute-0 sudo[105453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pehshsagyxdstpcjziqkpgrjiqozajid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798879.7389712-407-140428971346689/AnsiballZ_container_config_hash.py'
Nov 22 08:08:00 compute-0 sudo[105453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:00 compute-0 python3.9[105455]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 08:08:00 compute-0 sudo[105453]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:01 compute-0 sudo[105605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skpsqzrthwosdfmgllbqfnypogvsshqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798880.6540892-416-196084821901197/AnsiballZ_podman_container_info.py'
Nov 22 08:08:01 compute-0 sudo[105605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:01 compute-0 python3.9[105607]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 22 08:08:02 compute-0 sudo[105605]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:02 compute-0 sudo[105784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jybzghiyfjyrkzvzbqjjucyrbqfavntc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763798882.5028572-429-255600688873886/AnsiballZ_edpm_container_manage.py'
Nov 22 08:08:02 compute-0 sudo[105784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:03 compute-0 python3[105786]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 08:08:03 compute-0 podman[105821]: 2025-11-22 08:08:03.415097826 +0000 UTC m=+0.054534411 container create b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:08:03 compute-0 podman[105821]: 2025-11-22 08:08:03.384127355 +0000 UTC m=+0.023563990 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 08:08:03 compute-0 python3[105786]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 08:08:03 compute-0 sudo[105784]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:03 compute-0 sudo[106009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfheovwjibhzsmnhzdnxwvsngaqrulyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798883.7334046-437-130903618394755/AnsiballZ_stat.py'
Nov 22 08:08:03 compute-0 sudo[106009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:04 compute-0 python3.9[106011]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:08:04 compute-0 sudo[106009]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:04 compute-0 sudo[106163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxdufwtprwvylofoqjnzxwkdmfjpsmpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798884.5993595-446-26391123310307/AnsiballZ_file.py'
Nov 22 08:08:04 compute-0 sudo[106163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:05 compute-0 python3.9[106165]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:08:05 compute-0 sudo[106163]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:05 compute-0 sudo[106239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onduviwvtjkvnnisxiffuvrubztensrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798884.5993595-446-26391123310307/AnsiballZ_stat.py'
Nov 22 08:08:05 compute-0 sudo[106239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:05 compute-0 python3.9[106241]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:08:05 compute-0 sudo[106239]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:05 compute-0 sudo[106390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riwvgzrovzivhexcxyhloainrfzlfvxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798885.5796926-446-52353302656326/AnsiballZ_copy.py'
Nov 22 08:08:06 compute-0 sudo[106390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:06 compute-0 python3.9[106392]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763798885.5796926-446-52353302656326/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:08:06 compute-0 sudo[106390]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:06 compute-0 sudo[106466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywyjnvisgvwulhkablvhovvkpohevrud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798885.5796926-446-52353302656326/AnsiballZ_systemd.py'
Nov 22 08:08:06 compute-0 sudo[106466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:06 compute-0 python3.9[106468]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:08:06 compute-0 systemd[1]: Reloading.
Nov 22 08:08:06 compute-0 systemd-sysv-generator[106498]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:08:06 compute-0 systemd-rc-local-generator[106495]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:08:06 compute-0 sudo[106466]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:07 compute-0 sudo[106576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwcefesifcvotihmvcgkwlggtiithvdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798885.5796926-446-52353302656326/AnsiballZ_systemd.py'
Nov 22 08:08:07 compute-0 sudo[106576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:07 compute-0 python3.9[106578]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:08:07 compute-0 systemd[1]: Reloading.
Nov 22 08:08:07 compute-0 systemd-sysv-generator[106614]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:08:07 compute-0 systemd-rc-local-generator[106610]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:08:07 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Nov 22 08:08:07 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:08:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d331a711d411a35645a9c9e27e388d53a6d76ae3c04a8e49f02983ffa1ff5cb8/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 22 08:08:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d331a711d411a35645a9c9e27e388d53a6d76ae3c04a8e49f02983ffa1ff5cb8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 08:08:07 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4.
Nov 22 08:08:07 compute-0 podman[106621]: 2025-11-22 08:08:07.953966669 +0000 UTC m=+0.131657914 container init b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 08:08:07 compute-0 ovn_metadata_agent[106637]: + sudo -E kolla_set_configs
Nov 22 08:08:07 compute-0 podman[106621]: 2025-11-22 08:08:07.9790717 +0000 UTC m=+0.156762955 container start b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 08:08:07 compute-0 edpm-start-podman-container[106621]: ovn_metadata_agent
Nov 22 08:08:08 compute-0 edpm-start-podman-container[106620]: Creating additional drop-in dependency for "ovn_metadata_agent" (b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4)
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: INFO:__main__:Validating config file
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: INFO:__main__:Copying service configuration files
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: INFO:__main__:Writing out command to execute
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 22 08:08:08 compute-0 systemd[1]: Reloading.
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: ++ cat /run_command
Nov 22 08:08:08 compute-0 podman[106644]: 2025-11-22 08:08:08.043483098 +0000 UTC m=+0.053020760 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: + CMD=neutron-ovn-metadata-agent
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: + ARGS=
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: + sudo kolla_copy_cacerts
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: + [[ ! -n '' ]]
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: + . kolla_extend_start
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: Running command: 'neutron-ovn-metadata-agent'
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: + umask 0022
Nov 22 08:08:08 compute-0 ovn_metadata_agent[106637]: + exec neutron-ovn-metadata-agent
Nov 22 08:08:08 compute-0 systemd-rc-local-generator[106711]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:08:08 compute-0 systemd-sysv-generator[106715]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:08:08 compute-0 systemd[1]: Started ovn_metadata_agent container.
Nov 22 08:08:08 compute-0 sudo[106576]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:08 compute-0 sshd-session[98387]: Connection closed by 192.168.122.30 port 49692
Nov 22 08:08:08 compute-0 sshd-session[98384]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:08:08 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Nov 22 08:08:08 compute-0 systemd[1]: session-22.scope: Consumed 33.246s CPU time.
Nov 22 08:08:08 compute-0 systemd-logind[826]: Session 22 logged out. Waiting for processes to exit.
Nov 22 08:08:08 compute-0 systemd-logind[826]: Removed session 22.
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.895 106642 INFO neutron.common.config [-] Logging enabled!
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.895 106642 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.895 106642 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.895 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.896 106642 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.896 106642 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.896 106642 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.896 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.896 106642 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.896 106642 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.896 106642 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.896 106642 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.896 106642 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.897 106642 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.897 106642 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.897 106642 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.897 106642 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.897 106642 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.897 106642 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.897 106642 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.898 106642 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.898 106642 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.898 106642 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.898 106642 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.898 106642 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.898 106642 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.898 106642 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.898 106642 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.898 106642 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.898 106642 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.899 106642 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.899 106642 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.899 106642 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.899 106642 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.899 106642 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.899 106642 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.899 106642 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.899 106642 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.899 106642 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.900 106642 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.900 106642 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.900 106642 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.900 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.900 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.900 106642 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.900 106642 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.900 106642 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.901 106642 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.901 106642 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.901 106642 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.901 106642 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.901 106642 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.901 106642 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.901 106642 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.901 106642 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.901 106642 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.901 106642 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.902 106642 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.902 106642 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.902 106642 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.902 106642 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.902 106642 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.902 106642 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.902 106642 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.902 106642 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.902 106642 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.903 106642 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.903 106642 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.903 106642 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.903 106642 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.903 106642 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.903 106642 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.903 106642 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.903 106642 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.903 106642 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.904 106642 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.904 106642 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.904 106642 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.904 106642 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.904 106642 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.904 106642 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.904 106642 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.904 106642 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.904 106642 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.905 106642 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.905 106642 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.905 106642 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.905 106642 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.905 106642 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.905 106642 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.905 106642 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.905 106642 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.905 106642 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.906 106642 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.906 106642 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.906 106642 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.906 106642 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.906 106642 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.906 106642 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.906 106642 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.906 106642 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.906 106642 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.906 106642 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.906 106642 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.907 106642 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.907 106642 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.907 106642 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.907 106642 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.907 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.907 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.907 106642 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.908 106642 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.908 106642 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.908 106642 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.908 106642 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.908 106642 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.908 106642 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.908 106642 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.908 106642 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.909 106642 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.909 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.909 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.909 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.909 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.909 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.909 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.909 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.910 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.910 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.910 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.910 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.910 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.910 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.910 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.911 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.911 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.911 106642 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.911 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.911 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.911 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.911 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.911 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.912 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.912 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.912 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.912 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.912 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.912 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.912 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.912 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.912 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.913 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.913 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.913 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.913 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.913 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.913 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.913 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.913 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.913 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.914 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.914 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.914 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.914 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.914 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.914 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.914 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.914 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.914 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.914 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.915 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.915 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.915 106642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.915 106642 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.915 106642 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.915 106642 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.915 106642 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.915 106642 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.915 106642 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.916 106642 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.916 106642 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.916 106642 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.916 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.916 106642 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.916 106642 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.917 106642 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.917 106642 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.917 106642 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.917 106642 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.917 106642 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.917 106642 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.917 106642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.918 106642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.918 106642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.918 106642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.918 106642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.918 106642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.918 106642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.918 106642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.919 106642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.919 106642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.919 106642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.919 106642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.919 106642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.919 106642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.919 106642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.919 106642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.919 106642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.920 106642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.920 106642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.920 106642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.920 106642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.920 106642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.920 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.920 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.920 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.920 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.921 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.921 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.921 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.921 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.921 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.921 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.921 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.921 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.921 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.921 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.922 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.922 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.922 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.922 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.922 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.922 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.922 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.922 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.922 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.922 106642 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.923 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.923 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.923 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.923 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.923 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.923 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.923 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.923 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.923 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.924 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.924 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.924 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.924 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.924 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.924 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.924 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.924 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.924 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.925 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.925 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.925 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.925 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.925 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.925 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.925 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.925 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.925 106642 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.926 106642 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.926 106642 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.926 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.926 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.926 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.926 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.926 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.926 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.926 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.927 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.927 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.927 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.927 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.927 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.927 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.927 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.927 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.927 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.928 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.928 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.928 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.928 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.928 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.928 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.928 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.928 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.928 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.929 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.929 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.929 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.929 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.929 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.929 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.929 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.929 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.930 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.930 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.930 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.930 106642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.930 106642 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.939 106642 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.939 106642 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.939 106642 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.939 106642 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.939 106642 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.952 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name e5f17f07-bc92-4131-bf96-5df2839ca4b0 (UUID: e5f17f07-bc92-4131-bf96-5df2839ca4b0) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.979 106642 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.980 106642 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.980 106642 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.980 106642 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.983 106642 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.988 106642 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.994 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'e5f17f07-bc92-4131-bf96-5df2839ca4b0'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], external_ids={}, name=e5f17f07-bc92-4131-bf96-5df2839ca4b0, nb_cfg_timestamp=1763798843101, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.995 106642 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f92b43eddc0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.995 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.995 106642 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.996 106642 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:08:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.996 106642 INFO oslo_service.service [-] Starting 1 workers
Nov 22 08:08:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:09.999 106642 DEBUG oslo_service.service [-] Started child 106749 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 22 08:08:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:10.003 106642 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpc52ohuh1/privsep.sock']
Nov 22 08:08:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:10.003 106749 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-363508'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 22 08:08:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:10.027 106749 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 22 08:08:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:10.027 106749 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 22 08:08:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:10.027 106749 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 08:08:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:10.031 106749 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 22 08:08:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:10.037 106749 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 22 08:08:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:10.043 106749 INFO eventlet.wsgi.server [-] (106749) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 22 08:08:10 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 22 08:08:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:10.665 106642 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 22 08:08:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:10.666 106642 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpc52ohuh1/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 22 08:08:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:10.547 106754 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 22 08:08:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:10.551 106754 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 22 08:08:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:10.553 106754 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 22 08:08:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:10.553 106754 INFO oslo.privsep.daemon [-] privsep daemon running as pid 106754
Nov 22 08:08:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:10.668 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[8185bf95-4f84-4fe7-be1f-22370aeb3d9b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.205 106754 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.205 106754 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.205 106754 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.803 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[fa0b75e8-a8e5-4015-b496-c39cc1fb919a]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.806 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, column=external_ids, values=({'neutron:ovn-metadata-id': 'f250e67b-07f3-5ac4-ad88-c4fc72212f53'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.814 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.820 106642 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.821 106642 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.821 106642 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.821 106642 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.821 106642 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.821 106642 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.821 106642 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.821 106642 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.821 106642 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.822 106642 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.822 106642 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.822 106642 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.822 106642 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.822 106642 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.822 106642 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.822 106642 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.822 106642 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.823 106642 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.823 106642 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.823 106642 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.823 106642 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.823 106642 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.823 106642 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.823 106642 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.823 106642 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.824 106642 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.824 106642 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.824 106642 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.824 106642 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.824 106642 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.824 106642 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.824 106642 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.825 106642 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.825 106642 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.825 106642 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.825 106642 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.825 106642 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.825 106642 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.825 106642 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.826 106642 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.826 106642 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.826 106642 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.826 106642 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.826 106642 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.826 106642 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.826 106642 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.826 106642 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.827 106642 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.827 106642 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.827 106642 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.827 106642 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.827 106642 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.827 106642 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.827 106642 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.827 106642 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.827 106642 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.827 106642 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.828 106642 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.828 106642 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.828 106642 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.828 106642 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.828 106642 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.828 106642 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.828 106642 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.828 106642 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.828 106642 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.828 106642 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.829 106642 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.829 106642 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.829 106642 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.829 106642 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.829 106642 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.829 106642 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.829 106642 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.829 106642 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.830 106642 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.830 106642 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.830 106642 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.830 106642 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.830 106642 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.830 106642 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.830 106642 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.830 106642 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.830 106642 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.830 106642 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.831 106642 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.831 106642 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.831 106642 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.831 106642 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.831 106642 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.831 106642 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.831 106642 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.831 106642 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.831 106642 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.831 106642 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.832 106642 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.832 106642 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.832 106642 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.832 106642 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.832 106642 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.832 106642 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.832 106642 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.832 106642 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.832 106642 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.832 106642 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.833 106642 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.833 106642 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.833 106642 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.833 106642 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.833 106642 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.833 106642 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.833 106642 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.833 106642 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.834 106642 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.834 106642 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.834 106642 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.834 106642 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.834 106642 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.834 106642 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.834 106642 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.834 106642 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.835 106642 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.835 106642 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.835 106642 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.835 106642 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.835 106642 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.835 106642 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.835 106642 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.835 106642 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.835 106642 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.836 106642 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.836 106642 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.836 106642 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.836 106642 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.836 106642 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.836 106642 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.836 106642 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.837 106642 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.837 106642 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.837 106642 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.837 106642 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.837 106642 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.837 106642 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.837 106642 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.837 106642 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.837 106642 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.838 106642 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.838 106642 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.838 106642 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.838 106642 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.838 106642 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.838 106642 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.838 106642 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.838 106642 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.838 106642 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.838 106642 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.839 106642 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.839 106642 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.839 106642 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.839 106642 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.839 106642 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.839 106642 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.839 106642 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.839 106642 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.839 106642 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.839 106642 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.839 106642 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.840 106642 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.840 106642 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.840 106642 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.840 106642 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.840 106642 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.840 106642 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.840 106642 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.840 106642 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.840 106642 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.841 106642 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.841 106642 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.841 106642 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.841 106642 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.841 106642 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.841 106642 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.841 106642 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.841 106642 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.841 106642 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.842 106642 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.842 106642 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.842 106642 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.842 106642 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.842 106642 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.842 106642 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.842 106642 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.842 106642 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.842 106642 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.843 106642 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.843 106642 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.843 106642 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.843 106642 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.843 106642 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.843 106642 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.843 106642 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.843 106642 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.843 106642 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.844 106642 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.844 106642 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.844 106642 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.844 106642 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.844 106642 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.844 106642 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.844 106642 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.844 106642 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.844 106642 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.845 106642 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.845 106642 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.845 106642 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.845 106642 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.845 106642 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.845 106642 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.845 106642 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.845 106642 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.846 106642 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.846 106642 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.846 106642 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.846 106642 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.846 106642 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.846 106642 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.846 106642 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.846 106642 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.847 106642 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.847 106642 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.847 106642 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.847 106642 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.847 106642 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.847 106642 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.847 106642 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.847 106642 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.848 106642 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.848 106642 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.848 106642 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.848 106642 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.848 106642 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.848 106642 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.848 106642 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.848 106642 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.848 106642 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.849 106642 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.849 106642 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.849 106642 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.849 106642 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.849 106642 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.849 106642 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.849 106642 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.849 106642 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.850 106642 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.850 106642 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.850 106642 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.850 106642 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.850 106642 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.850 106642 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.850 106642 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.850 106642 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.850 106642 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.851 106642 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.851 106642 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.851 106642 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.851 106642 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.851 106642 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.851 106642 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.851 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.851 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.851 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.852 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.852 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.852 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.852 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.852 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.852 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.852 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.852 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.852 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.853 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.853 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.853 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.853 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.853 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.853 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.853 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.853 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.853 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.854 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.854 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.854 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.854 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.854 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.854 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.854 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.854 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.854 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.854 106642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.855 106642 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.855 106642 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.855 106642 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.855 106642 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:08:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:08:11.855 106642 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
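
The block above is oslo.config's standard option dump: at startup the agent calls ConfigOpts.log_opt_values(), which logs one DEBUG line per resolved option (group-prefixed for non-DEFAULT groups, secrets masked as ****) and closes with the row of asterisks emitted at cfg.py:2613. A minimal sketch of the same mechanism, with two option names borrowed from the dump above; the registration shown here is an assumption for illustration, not the agent's actual code:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    # Two options picked from the dump above; the real agent registers
    # hundreds of these via neutron/oslo libraries.
    CONF.register_opts([
        cfg.BoolOpt('use_syslog', default=False),
        cfg.IntOpt('wsgi_default_pool_size', default=100),
    ])
    CONF(args=[])  # parse an empty argv; a real service passes sys.argv[1:]

    # Logs one "name = value" line per option at DEBUG and ends with a
    # row of asterisks, exactly like the ovn_metadata_agent block above.
    CONF.log_opt_values(LOG, logging.DEBUG)
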
Nov 22 08:08:13 compute-0 sshd-session[106759]: Accepted publickey for zuul from 192.168.122.30 port 38942 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 08:08:13 compute-0 systemd-logind[826]: New session 23 of user zuul.
Nov 22 08:08:13 compute-0 systemd[1]: Started Session 23 of User zuul.
Nov 22 08:08:13 compute-0 sshd-session[106759]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 08:08:14 compute-0 python3.9[106912]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:08:15 compute-0 podman[106958]: 2025-11-22 08:08:15.164240125 +0000 UTC m=+0.119904424 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
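
The health_status=healthy event above is podman's periodic healthcheck firing for ovn_controller; the check command comes from the container's config_data ('test': '/openstack/healthcheck'). The same check can be triggered by hand with `podman healthcheck run`; a hypothetical wrapper, reusing the container name from the event:

    import subprocess

    # Runs the healthcheck configured for the container; exit code 0 maps
    # to the "healthy" status seen in the podman event above.
    result = subprocess.run(
        ['podman', 'healthcheck', 'run', 'ovn_controller'],
        capture_output=True, text=True,
    )
    print('healthy' if result.returncode == 0 else 'unhealthy')
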
Nov 22 08:08:15 compute-0 sudo[107092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqobxtbpldrkzvdmavhdyessyciauinh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798895.0351052-34-281466837224039/AnsiballZ_command.py'
Nov 22 08:08:15 compute-0 sudo[107092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:15 compute-0 python3.9[107094]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
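
The shell-escaped \{\{.Names\}\} in the task arguments is Go-template syntax passed through to podman, and the anchored regex makes the name filter exact. Reproduced outside Ansible (an assumed, equivalent invocation of the same command):

    import subprocess

    # Same command the ansible.legacy.command task ran: -a includes stopped
    # containers, ^nova_virtlogd$ matches the name exactly, and --format
    # prints only the Names column.
    out = subprocess.run(
        ['podman', 'ps', '-a', '--filter', 'name=^nova_virtlogd$',
         '--format', '{{.Names}}'],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out or 'no nova_virtlogd container found')
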
Nov 22 08:08:15 compute-0 sudo[107092]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:16 compute-0 sudo[107256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvrwwzzgbpyokrcntjebkhobzhkiomyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798895.968928-45-76393722122567/AnsiballZ_systemd_service.py'
Nov 22 08:08:16 compute-0 sudo[107256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:16 compute-0 python3.9[107258]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:08:16 compute-0 systemd[1]: Reloading.
Nov 22 08:08:17 compute-0 systemd-rc-local-generator[107286]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:08:17 compute-0 systemd-sysv-generator[107289]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:08:17 compute-0 sudo[107256]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:17 compute-0 python3.9[107443]: ansible-ansible.builtin.service_facts Invoked
Nov 22 08:08:17 compute-0 network[107460]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 08:08:17 compute-0 network[107461]: 'network-scripts' will be removed from distribution in near future.
Nov 22 08:08:17 compute-0 network[107462]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 08:08:21 compute-0 sudo[107723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prkqtifcgnkgkeowsqwotsboinixcymp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798901.1106942-64-275098942125542/AnsiballZ_systemd_service.py'
Nov 22 08:08:21 compute-0 sudo[107723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:21 compute-0 python3.9[107725]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:08:21 compute-0 sudo[107723]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:21 compute-0 sshd-session[107566]: Invalid user loginuser from 80.94.92.164 port 42646
Nov 22 08:08:22 compute-0 sudo[107876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzumylphmixjjsyurvwqmwlfecmgwpvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798901.8515368-64-38433870528443/AnsiballZ_systemd_service.py'
Nov 22 08:08:22 compute-0 sudo[107876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:22 compute-0 sshd-session[107566]: Connection closed by invalid user loginuser 80.94.92.164 port 42646 [preauth]
Nov 22 08:08:22 compute-0 python3.9[107878]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:08:22 compute-0 sudo[107876]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:22 compute-0 sudo[108029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljvctpwtjllcrfdzzlmqmoedvzgbhpku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798902.5580578-64-111120448815242/AnsiballZ_systemd_service.py'
Nov 22 08:08:22 compute-0 sudo[108029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:23 compute-0 python3.9[108031]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:08:23 compute-0 sudo[108029]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:23 compute-0 sudo[108182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipnqfpwyhuaortnnbzgbvzqylnsudilm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798903.303047-64-251446701539754/AnsiballZ_systemd_service.py'
Nov 22 08:08:23 compute-0 sudo[108182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:23 compute-0 python3.9[108184]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:08:23 compute-0 sudo[108182]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:24 compute-0 sudo[108335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rinrnpwtahzsvohwubeipqwkdsxexqug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798904.010331-64-217364086386439/AnsiballZ_systemd_service.py'
Nov 22 08:08:24 compute-0 sudo[108335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:24 compute-0 python3.9[108337]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:08:24 compute-0 sudo[108335]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:24 compute-0 sudo[108488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlcbpddepcgnumhbqapcbawlqssvmltv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798904.7091546-64-233440970948086/AnsiballZ_systemd_service.py'
Nov 22 08:08:24 compute-0 sudo[108488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:25 compute-0 python3.9[108490]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:08:25 compute-0 sudo[108488]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:25 compute-0 sudo[108641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivsdfhfybszdnzrtfgjktopqnyoxmzjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798905.3822258-64-200839459403972/AnsiballZ_systemd_service.py'
Nov 22 08:08:25 compute-0 sudo[108641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:25 compute-0 python3.9[108643]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:08:26 compute-0 sudo[108641]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:26 compute-0 sudo[108794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdzutxvlputhpggnpzocxeiqoahazlgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798906.2596123-116-225407292326167/AnsiballZ_file.py'
Nov 22 08:08:26 compute-0 sudo[108794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:26 compute-0 python3.9[108796]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:08:26 compute-0 sudo[108794]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:27 compute-0 sudo[108946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpcchjqfnpatkdcvciebptlvbjopmmjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798906.9673257-116-6465272208504/AnsiballZ_file.py'
Nov 22 08:08:27 compute-0 sudo[108946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:27 compute-0 python3.9[108948]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:08:27 compute-0 sudo[108946]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:27 compute-0 sudo[109098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgbmhxsqnkemlhsnywlikybqphrlgfcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798907.5318663-116-150493317286194/AnsiballZ_file.py'
Nov 22 08:08:27 compute-0 sudo[109098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:27 compute-0 python3.9[109100]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:08:27 compute-0 sudo[109098]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:28 compute-0 sudo[109250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffoktucxkuexpnybcxigmushwtxavcpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798908.061471-116-220923572628101/AnsiballZ_file.py'
Nov 22 08:08:28 compute-0 sudo[109250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:28 compute-0 python3.9[109252]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:08:28 compute-0 sudo[109250]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:28 compute-0 sudo[109402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdriaafkefuvxoyruacanhaxllsnlaxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798908.6291447-116-251803507246163/AnsiballZ_file.py'
Nov 22 08:08:28 compute-0 sudo[109402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:29 compute-0 python3.9[109404]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:08:29 compute-0 sudo[109402]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:29 compute-0 sudo[109554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-simqpaeqvpzvtdirdgimtssbjdljmlnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798909.2424233-116-123085329092191/AnsiballZ_file.py'
Nov 22 08:08:29 compute-0 sudo[109554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:29 compute-0 python3.9[109556]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:08:29 compute-0 sudo[109554]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:30 compute-0 sudo[109706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkyyarttqejicursnlhnpirifiwmzvaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798909.9135158-116-215869257177059/AnsiballZ_file.py'
Nov 22 08:08:30 compute-0 sudo[109706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:30 compute-0 python3.9[109708]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:08:30 compute-0 sudo[109706]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:30 compute-0 sudo[109858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udwgjuwaifgtgphtfjsrdrputkhhkhas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798910.5659215-166-233616956928483/AnsiballZ_file.py'
Nov 22 08:08:30 compute-0 sudo[109858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:31 compute-0 python3.9[109860]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:08:31 compute-0 sudo[109858]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:31 compute-0 sudo[110010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bimyawjzqdlvqgxezgjzuderyhhifwgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798911.2787266-166-87266212094962/AnsiballZ_file.py'
Nov 22 08:08:31 compute-0 sudo[110010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:31 compute-0 python3.9[110012]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:08:31 compute-0 sudo[110010]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:32 compute-0 sudo[110162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uamzyjepptydywlzxxphepkgfrqmkgxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798911.7996645-166-3401167317356/AnsiballZ_file.py'
Nov 22 08:08:32 compute-0 sudo[110162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:32 compute-0 python3.9[110164]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:08:32 compute-0 sudo[110162]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:32 compute-0 sudo[110314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujfheiwqncttbjtsgkvhjgfpfzrwgebt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798912.3705187-166-94778616411144/AnsiballZ_file.py'
Nov 22 08:08:32 compute-0 sudo[110314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:32 compute-0 python3.9[110316]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:08:32 compute-0 sudo[110314]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:33 compute-0 sudo[110466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyearxiyffldahvxffqkyeigkzwoitqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798913.0460443-166-258773714786069/AnsiballZ_file.py'
Nov 22 08:08:33 compute-0 sudo[110466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:33 compute-0 python3.9[110468]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:08:33 compute-0 sudo[110466]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:33 compute-0 sudo[110618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdjspowcsrurvnhzkfmoawaqwhbtqigq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798913.6112604-166-43744553784076/AnsiballZ_file.py'
Nov 22 08:08:33 compute-0 sudo[110618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:34 compute-0 python3.9[110620]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:08:34 compute-0 sudo[110618]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:34 compute-0 sudo[110770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyuewvxgnherkjihfkjydpsirfedjjvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798914.2053628-166-186573397837133/AnsiballZ_file.py'
Nov 22 08:08:34 compute-0 sudo[110770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:34 compute-0 python3.9[110772]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:08:34 compute-0 sudo[110770]: pam_unix(sudo:session): session closed for user root
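The file tasks above sweep the same seven tripleo libvirt units out of /usr/lib/systemd/system and then /etc/systemd/system, one ansible.builtin.file invocation per path with state=absent. A compact equivalent, assuming a loop is acceptable (the logged play clearly runs one task per file; unit and directory names are copied from the records above):

    - name: Remove leftover tripleo libvirt unit files  # illustrative name
      ansible.builtin.file:
        path: "{{ item.0 }}/{{ item.1 }}"
        state: absent
      loop: "{{ ['/usr/lib/systemd/system', '/etc/systemd/system'] | product(tripleo_units) | list }}"
      vars:
        tripleo_units:
          - tripleo_nova_libvirt.target
          - tripleo_nova_virtlogd_wrapper.service
          - tripleo_nova_virtnodedevd.service
          - tripleo_nova_virtproxyd.service
          - tripleo_nova_virtqemud.service
          - tripleo_nova_virtsecretd.service
          - tripleo_nova_virtstoraged.service
      become: true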
Nov 22 08:08:35 compute-0 sudo[110922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vibnslbsbbwafclkmwecxghaxorofbyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798914.8796995-217-183432948836874/AnsiballZ_command.py'
Nov 22 08:08:35 compute-0 sudo[110922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:35 compute-0 python3.9[110924]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:08:35 compute-0 sudo[110922]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:36 compute-0 python3.9[111076]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
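The command record above carries a small shell guard: certmonger.service is disabled only if it is currently active, and it is masked only when no unit file exists under /etc/systemd/system; the ansible.builtin.find that follows inventories leftover certificate requests in /var/lib/certmonger/requests. The same two steps as sketched tasks (shell body and find parameters copied from the log; task names are mine):

    - name: Disable and mask certmonger if it is still active  # illustrative name
      ansible.builtin.shell: |
        if systemctl is-active certmonger.service; then
          systemctl disable --now certmonger.service
          test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
        fi
      become: true

    - name: Find leftover certmonger request files
      ansible.builtin.find:
        paths: /var/lib/certmonger/requests
        file_type: any
        hidden: true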
Nov 22 08:08:36 compute-0 sudo[111226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqsodksibzbfrqqamzeqaikjvpxpqqfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798916.2991586-235-231050669077132/AnsiballZ_systemd_service.py'
Nov 22 08:08:36 compute-0 sudo[111226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:36 compute-0 python3.9[111228]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:08:36 compute-0 systemd[1]: Reloading.
Nov 22 08:08:36 compute-0 systemd-rc-local-generator[111255]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:08:36 compute-0 systemd-sysv-generator[111260]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:08:37 compute-0 sudo[111226]: pam_unix(sudo:session): session closed for user root
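With the unit files deleted, a bare daemon_reload makes systemd drop its cached view of them; the "Reloading." line and the two generator warnings above are that reload. A sketch of the reload-only invocation recorded in the log:

    - name: Reload systemd after removing the tripleo units  # illustrative name
      ansible.builtin.systemd_service:
        daemon_reload: true
      become: true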
Nov 22 08:08:37 compute-0 sudo[111414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcipndkucqqowhxjnnjzgxqgyngxmlyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798917.2753904-243-20711552175569/AnsiballZ_command.py'
Nov 22 08:08:37 compute-0 sudo[111414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:37 compute-0 python3.9[111416]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:08:37 compute-0 sudo[111414]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:38 compute-0 sudo[111567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppesfhqsvvonbmgvrfltphjovvrrsatk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798917.926312-243-144509828616657/AnsiballZ_command.py'
Nov 22 08:08:38 compute-0 sudo[111567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:38 compute-0 python3.9[111569]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:08:38 compute-0 sudo[111567]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:38 compute-0 podman[111571]: 2025-11-22 08:08:38.475033942 +0000 UTC m=+0.067793189 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
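The podman health_status records interleaved through this section are periodic healthcheck runs for the ovn_metadata_agent and ovn_controller containers; each one dumps the container's full config_data. For readability, here is the healthcheck stanza buried in the record above, re-indented as YAML (values copied verbatim from the log):

    healthcheck:
      mount: /var/lib/openstack/healthchecks/ovn_metadata_agent
      test: /openstack/healthcheck

Per the volumes list in the same record, the host directory named in mount is bound read-only at /openstack inside the container, so the test command podman executes on each run is the mounted /openstack/healthcheck script.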
Nov 22 08:08:38 compute-0 sudo[111740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjcmofwsiyshirflzdcbqjtpofbafnqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798918.5448313-243-232883870535140/AnsiballZ_command.py'
Nov 22 08:08:38 compute-0 sudo[111740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:39 compute-0 python3.9[111742]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:08:39 compute-0 sudo[111740]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:39 compute-0 sudo[111893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkdvnlulnqnxxbfygqhhmhmmmguehiin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798919.2006545-243-82610914061302/AnsiballZ_command.py'
Nov 22 08:08:39 compute-0 sudo[111893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:39 compute-0 python3.9[111895]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:08:39 compute-0 sudo[111893]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:40 compute-0 sudo[112046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uedjcrqnijqiaxrthvxxjgwkpefkwnlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798919.768217-243-239932148174966/AnsiballZ_command.py'
Nov 22 08:08:40 compute-0 sudo[112046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:40 compute-0 python3.9[112048]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:08:40 compute-0 sudo[112046]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:40 compute-0 sudo[112199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xghojxcufirhswubpaoijnadfwzwosmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798920.3242047-243-154300705975418/AnsiballZ_command.py'
Nov 22 08:08:40 compute-0 sudo[112199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:40 compute-0 python3.9[112201]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:08:40 compute-0 sudo[112199]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:41 compute-0 sudo[112352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-petveopmjnfwdcnvgsrzucrnzcdfaagc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798921.0105674-243-42302852111736/AnsiballZ_command.py'
Nov 22 08:08:41 compute-0 sudo[112352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:41 compute-0 python3.9[112354]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:08:41 compute-0 sudo[112352]: pam_unix(sudo:session): session closed for user root
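The seven command tasks above run systemctl reset-failed for each removed unit; deleting a unit file does not clear any "failed" state systemd may still remember for it, so this keeps the unit list clean. As one looped sketch (the play, as logged, issues one command per unit):

    - name: Clear residual failed state for the removed tripleo units  # illustrative name
      ansible.builtin.command: /usr/bin/systemctl reset-failed {{ item }}
      loop:
        - tripleo_nova_libvirt.target
        - tripleo_nova_virtlogd_wrapper.service
        - tripleo_nova_virtnodedevd.service
        - tripleo_nova_virtproxyd.service
        - tripleo_nova_virtqemud.service
        - tripleo_nova_virtsecretd.service
        - tripleo_nova_virtstoraged.service
      become: true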
Nov 22 08:08:42 compute-0 sudo[112505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkkvsleavdstxcvxnmbwrpltyanmrxkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798921.8741183-297-243501990754811/AnsiballZ_getent.py'
Nov 22 08:08:42 compute-0 sudo[112505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:42 compute-0 python3.9[112507]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 22 08:08:42 compute-0 sudo[112505]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:43 compute-0 sudo[112658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvoralyeggpntlyiqyipmigrbfziywtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798922.7433805-305-33477808203025/AnsiballZ_group.py'
Nov 22 08:08:43 compute-0 sudo[112658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:43 compute-0 python3.9[112660]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 08:08:43 compute-0 groupadd[112661]: group added to /etc/group: name=libvirt, GID=42473
Nov 22 08:08:43 compute-0 groupadd[112661]: group added to /etc/gshadow: name=libvirt
Nov 22 08:08:43 compute-0 groupadd[112661]: new group: name=libvirt, GID=42473
Nov 22 08:08:43 compute-0 sudo[112658]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:44 compute-0 sudo[112816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmfsedyqvggomtcjhechvryssqdrmnoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798923.6314957-313-141571346787631/AnsiballZ_user.py'
Nov 22 08:08:44 compute-0 sudo[112816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:44 compute-0 python3.9[112818]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 08:08:44 compute-0 useradd[112820]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Nov 22 08:08:44 compute-0 sudo[112816]: pam_unix(sudo:session): session closed for user root
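The getent/groupadd/useradd sequence above provisions a fixed-ID service account: the passwd database is probed for "libvirt", then the group and user are created with matching GID/UID 42473 and a nologin shell. Reconstructed from the logged parameters (a sketch; ordering and values as recorded):

    - name: Check whether a libvirt user already exists
      ansible.builtin.getent:
        database: passwd
        key: libvirt

    - name: Create the libvirt group with a fixed GID
      ansible.builtin.group:
        name: libvirt
        gid: 42473

    - name: Create the libvirt service account
      ansible.builtin.user:
        name: libvirt
        comment: libvirt user
        uid: 42473
        group: libvirt
        shell: /sbin/nologin
      become: true

The pinned UID/GID presumably keeps ownership of libvirt state stable across hosts and rebuilt containers, rather than leaving the IDs to whatever the package scriptlets would allocate.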
Nov 22 08:08:45 compute-0 sudo[112976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbohloblllgevxbcypbawdulonohbymi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798924.787539-324-67744386719540/AnsiballZ_setup.py'
Nov 22 08:08:45 compute-0 sudo[112976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:45 compute-0 python3.9[112978]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 08:08:45 compute-0 sudo[112976]: pam_unix(sudo:session): session closed for user root
Nov 22 08:08:46 compute-0 sudo[113075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdtlrtiajjsnjxzdvquaribwcztvhwgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763798924.787539-324-67744386719540/AnsiballZ_dnf.py'
Nov 22 08:08:46 compute-0 sudo[113075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:08:46 compute-0 podman[113034]: 2025-11-22 08:08:46.091044591 +0000 UTC m=+0.077424958 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:08:46 compute-0 python3.9[113082]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
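The dnf task above installs the virtualization stack: libvirt with its client and daemon packages, qemu-kvm/qemu-img, libguestfs, swtpm for vTPM emulation, edk2-ovmf for UEFI guests, ceph-common for RBD access, and cyrus-sasl-scram for SASL authentication. Note the trailing spaces inside the first four logged package names ('libvirt ', 'libvirt-admin ', ...), carried over from the playbook's list; the scriptlet activity that follows in the log (SELinux policy rebuilds, new dnsmasq/clevis/ceph accounts, an sshd restart, the man-db refresh) shows the transaction went through regardless. A trimmed sketch of the same task:

    - name: Install the libvirt/QEMU virtualization stack  # illustrative name
      ansible.builtin.dnf:
        name:
          - libvirt
          - libvirt-admin
          - libvirt-client
          - libvirt-daemon
          - qemu-kvm
          - qemu-img
          - libguestfs
          - libseccomp
          - swtpm
          - swtpm-tools
          - edk2-ovmf
          - ceph-common
          - cyrus-sasl-scram
        state: present
      become: true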
Nov 22 08:09:09 compute-0 podman[113280]: 2025-11-22 08:09:09.105962312 +0000 UTC m=+0.060790750 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 22 08:09:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:09:09.933 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:09:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:09:09.934 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:09:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:09:09.935 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:09:16 compute-0 kernel: SELinux:  Converting 2758 SID table entries...
Nov 22 08:09:16 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 08:09:16 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 22 08:09:16 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 08:09:16 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 22 08:09:16 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 08:09:16 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 08:09:16 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 08:09:17 compute-0 dbus-broker-launch[817]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 22 08:09:17 compute-0 podman[113306]: 2025-11-22 08:09:17.199055231 +0000 UTC m=+0.127125554 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:09:26 compute-0 kernel: SELinux:  Converting 2758 SID table entries...
Nov 22 08:09:26 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 08:09:26 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 22 08:09:26 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 08:09:26 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 22 08:09:26 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 08:09:26 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 08:09:26 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 08:09:40 compute-0 dbus-broker-launch[817]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 22 08:09:40 compute-0 podman[113340]: 2025-11-22 08:09:40.116944308 +0000 UTC m=+0.063208173 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 22 08:09:48 compute-0 podman[117793]: 2025-11-22 08:09:48.171341829 +0000 UTC m=+0.123532058 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 08:10:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:10:09.936 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:10:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:10:09.936 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:10:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:10:09.936 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:10:11 compute-0 podman[130165]: 2025-11-22 08:10:11.129093656 +0000 UTC m=+0.080117631 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 08:10:19 compute-0 podman[130198]: 2025-11-22 08:10:19.146850407 +0000 UTC m=+0.107333925 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true)
Nov 22 08:10:23 compute-0 kernel: SELinux:  Converting 2759 SID table entries...
Nov 22 08:10:23 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 08:10:23 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 22 08:10:23 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 08:10:23 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 22 08:10:23 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 08:10:23 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 08:10:23 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 08:10:24 compute-0 groupadd[130236]: group added to /etc/group: name=dnsmasq, GID=992
Nov 22 08:10:24 compute-0 groupadd[130236]: group added to /etc/gshadow: name=dnsmasq
Nov 22 08:10:24 compute-0 groupadd[130236]: new group: name=dnsmasq, GID=992
Nov 22 08:10:24 compute-0 useradd[130243]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Nov 22 08:10:24 compute-0 dbus-broker-launch[816]: Noticed file-system modification, trigger reload.
Nov 22 08:10:24 compute-0 dbus-broker-launch[817]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 22 08:10:24 compute-0 dbus-broker-launch[816]: Noticed file-system modification, trigger reload.
Nov 22 08:10:25 compute-0 groupadd[130256]: group added to /etc/group: name=clevis, GID=991
Nov 22 08:10:25 compute-0 groupadd[130256]: group added to /etc/gshadow: name=clevis
Nov 22 08:10:25 compute-0 groupadd[130256]: new group: name=clevis, GID=991
Nov 22 08:10:25 compute-0 useradd[130263]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 22 08:10:25 compute-0 usermod[130273]: add 'clevis' to group 'tss'
Nov 22 08:10:25 compute-0 usermod[130273]: add 'clevis' to shadow group 'tss'
Nov 22 08:10:28 compute-0 polkitd[43629]: Reloading rules
Nov 22 08:10:28 compute-0 polkitd[43629]: Collecting garbage unconditionally...
Nov 22 08:10:28 compute-0 polkitd[43629]: Loading rules from directory /etc/polkit-1/rules.d
Nov 22 08:10:28 compute-0 polkitd[43629]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 22 08:10:28 compute-0 polkitd[43629]: Finished loading, compiling and executing 3 rules
Nov 22 08:10:28 compute-0 polkitd[43629]: Reloading rules
Nov 22 08:10:28 compute-0 polkitd[43629]: Collecting garbage unconditionally...
Nov 22 08:10:28 compute-0 polkitd[43629]: Loading rules from directory /etc/polkit-1/rules.d
Nov 22 08:10:28 compute-0 polkitd[43629]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 22 08:10:28 compute-0 polkitd[43629]: Finished loading, compiling and executing 3 rules
Nov 22 08:10:30 compute-0 groupadd[130460]: group added to /etc/group: name=ceph, GID=167
Nov 22 08:10:30 compute-0 groupadd[130460]: group added to /etc/gshadow: name=ceph
Nov 22 08:10:30 compute-0 groupadd[130460]: new group: name=ceph, GID=167
Nov 22 08:10:30 compute-0 useradd[130466]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Nov 22 08:10:33 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Nov 22 08:10:33 compute-0 sshd[1014]: Received signal 15; terminating.
Nov 22 08:10:33 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Nov 22 08:10:33 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Nov 22 08:10:33 compute-0 systemd[1]: sshd.service: Consumed 1.467s CPU time, read 32.0K from disk, written 32.0K to disk.
Nov 22 08:10:33 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Nov 22 08:10:33 compute-0 systemd[1]: Stopping sshd-keygen.target...
Nov 22 08:10:33 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 08:10:33 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 08:10:33 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 08:10:33 compute-0 systemd[1]: Reached target sshd-keygen.target.
Nov 22 08:10:33 compute-0 systemd[1]: Starting OpenSSH server daemon...
Nov 22 08:10:33 compute-0 sshd[130985]: Server listening on 0.0.0.0 port 22.
Nov 22 08:10:33 compute-0 sshd[130985]: Server listening on :: port 22.
Nov 22 08:10:33 compute-0 systemd[1]: Started OpenSSH server daemon.
Nov 22 08:10:35 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 08:10:35 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 08:10:35 compute-0 systemd[1]: Reloading.
Nov 22 08:10:35 compute-0 systemd-rc-local-generator[131237]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:10:35 compute-0 systemd-sysv-generator[131245]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:10:35 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 08:10:38 compute-0 sudo[113075]: pam_unix(sudo:session): session closed for user root
Nov 22 08:10:39 compute-0 sudo[135722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqjxmwadegejhvqajoiiblzryayxdfnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799038.9788797-336-84657757324075/AnsiballZ_systemd.py'
Nov 22 08:10:39 compute-0 sudo[135722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:39 compute-0 python3.9[135744]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 08:10:39 compute-0 systemd[1]: Reloading.
Nov 22 08:10:39 compute-0 systemd-sysv-generator[136182]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:10:40 compute-0 systemd-rc-local-generator[136178]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:10:40 compute-0 sudo[135722]: pam_unix(sudo:session): session closed for user root
Nov 22 08:10:40 compute-0 sudo[137032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjoivgseelwffauswibnkqouggddhbsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799040.3511608-336-189502221803730/AnsiballZ_systemd.py'
Nov 22 08:10:40 compute-0 sudo[137032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:40 compute-0 python3.9[137054]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 08:10:40 compute-0 systemd[1]: Reloading.
Nov 22 08:10:41 compute-0 systemd-rc-local-generator[137537]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:10:41 compute-0 systemd-sysv-generator[137541]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:10:41 compute-0 sudo[137032]: pam_unix(sudo:session): session closed for user root
Nov 22 08:10:41 compute-0 podman[137641]: 2025-11-22 08:10:41.414370093 +0000 UTC m=+0.081987681 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 22 08:10:41 compute-0 sudo[138189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irduvechfnwlbbbkwzjchsasuklvescr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799041.4787402-336-20084414823425/AnsiballZ_systemd.py'
Nov 22 08:10:41 compute-0 sudo[138189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:42 compute-0 python3.9[138212]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 08:10:42 compute-0 systemd[1]: Reloading.
Nov 22 08:10:42 compute-0 systemd-rc-local-generator[138704]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:10:42 compute-0 systemd-sysv-generator[138707]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:10:42 compute-0 sudo[138189]: pam_unix(sudo:session): session closed for user root
Nov 22 08:10:42 compute-0 sudo[139541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmtoktypyzczlryomygzdwhizjxiphua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799042.4902194-336-146124046913606/AnsiballZ_systemd.py'
Nov 22 08:10:42 compute-0 sudo[139541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:43 compute-0 python3.9[139543]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 08:10:43 compute-0 systemd[1]: Reloading.
Nov 22 08:10:43 compute-0 systemd-sysv-generator[139834]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:10:43 compute-0 systemd-rc-local-generator[139830]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:10:43 compute-0 sudo[139541]: pam_unix(sudo:session): session closed for user root
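The four systemd tasks above retire the monolithic libvirt entry points: libvirtd.service and the libvirtd-tcp, libvirtd-tls, and virtproxyd-tcp sockets are stopped, disabled, and masked so nothing can activate them behind the modular daemons' back. As one looped sketch (the log shows one task per unit, each triggering its own daemon reload):

    - name: Stop, disable, and mask the monolithic libvirtd entry points  # illustrative name
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: stopped
        enabled: false
        masked: true
      loop:
        - libvirtd
        - libvirtd-tcp.socket
        - libvirtd-tls.socket
        - virtproxyd-tcp.socket
      become: true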
Nov 22 08:10:43 compute-0 sudo[140436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqnwjirplvurjpzwuuwohotfrftnstdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799043.5448322-365-266770563907984/AnsiballZ_systemd.py'
Nov 22 08:10:43 compute-0 sudo[140436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:44 compute-0 python3.9[140438]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 08:10:44 compute-0 systemd[1]: Reloading.
Nov 22 08:10:44 compute-0 systemd-rc-local-generator[140509]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:10:44 compute-0 systemd-sysv-generator[140512]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:10:44 compute-0 sudo[140436]: pam_unix(sudo:session): session closed for user root
Nov 22 08:10:44 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 08:10:44 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 08:10:44 compute-0 systemd[1]: man-db-cache-update.service: Consumed 9.960s CPU time.
Nov 22 08:10:44 compute-0 systemd[1]: run-r29e578244129438fac1c7e652681f178.service: Deactivated successfully.
Nov 22 08:10:45 compute-0 sudo[140744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hutrfmljuzrewhrruznhipjoojabhkdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799044.7881396-365-147101980305215/AnsiballZ_systemd.py'
Nov 22 08:10:45 compute-0 sudo[140744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:45 compute-0 python3.9[140746]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 08:10:45 compute-0 systemd[1]: Reloading.
Nov 22 08:10:45 compute-0 systemd-sysv-generator[140780]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:10:45 compute-0 systemd-rc-local-generator[140776]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:10:45 compute-0 sudo[140744]: pam_unix(sudo:session): session closed for user root
Nov 22 08:10:46 compute-0 sudo[140933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjyalidaquvikidmaasbnawpaazgcsdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799045.827476-365-6690453182086/AnsiballZ_systemd.py'
Nov 22 08:10:46 compute-0 sudo[140933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:46 compute-0 python3.9[140935]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 08:10:46 compute-0 systemd[1]: Reloading.
Nov 22 08:10:46 compute-0 systemd-rc-local-generator[140965]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:10:46 compute-0 systemd-sysv-generator[140968]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:10:46 compute-0 sudo[140933]: pam_unix(sudo:session): session closed for user root
Nov 22 08:10:47 compute-0 sudo[141123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcbcykrfczlupzfynxjnemeyozqolwwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799046.8810885-365-257024949806143/AnsiballZ_systemd.py'
Nov 22 08:10:47 compute-0 sudo[141123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:47 compute-0 python3.9[141125]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 08:10:47 compute-0 sudo[141123]: pam_unix(sudo:session): session closed for user root
Nov 22 08:10:48 compute-0 sudo[141278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aalqihhiegybsfyknvfgycqeexspwmhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799047.7118938-365-116434634612495/AnsiballZ_systemd.py'
Nov 22 08:10:48 compute-0 sudo[141278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:48 compute-0 python3.9[141280]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 08:10:48 compute-0 systemd[1]: Reloading.
Nov 22 08:10:48 compute-0 systemd-rc-local-generator[141313]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:10:48 compute-0 systemd-sysv-generator[141318]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:10:48 compute-0 sudo[141278]: pam_unix(sudo:session): session closed for user root
Nov 22 08:10:49 compute-0 sudo[141479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzxnqxzoailpfhetmpsrpmayzflzvpkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799049.33357-401-40506096929567/AnsiballZ_systemd.py'
Nov 22 08:10:49 compute-0 sudo[141479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:49 compute-0 podman[141443]: 2025-11-22 08:10:49.670793994 +0000 UTC m=+0.091313192 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 22 08:10:49 compute-0 python3.9[141489]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 08:10:49 compute-0 systemd[1]: Reloading.
Nov 22 08:10:50 compute-0 systemd-rc-local-generator[141527]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:10:50 compute-0 systemd-sysv-generator[141531]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:10:50 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 22 08:10:50 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 22 08:10:50 compute-0 sudo[141479]: pam_unix(sudo:session): session closed for user root
Nov 22 08:10:50 compute-0 sudo[141686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsbapanfreohdmpodmsijctyvalqftjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799050.6556206-409-88777372237796/AnsiballZ_systemd.py'
Nov 22 08:10:50 compute-0 sudo[141686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:51 compute-0 python3.9[141688]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 08:10:51 compute-0 sudo[141686]: pam_unix(sudo:session): session closed for user root
Nov 22 08:10:51 compute-0 sudo[141841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjrtvnbpilrkeworwqldkvlmqhvtgsbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799051.4897006-409-191647184856990/AnsiballZ_systemd.py'
Nov 22 08:10:51 compute-0 sudo[141841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:52 compute-0 python3.9[141843]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 08:10:52 compute-0 sudo[141841]: pam_unix(sudo:session): session closed for user root
Nov 22 08:10:52 compute-0 sudo[141996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwaqwnkzsokpabydobclqbqvskvfbckt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799052.3192213-409-268396888375418/AnsiballZ_systemd.py'
Nov 22 08:10:52 compute-0 sudo[141996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:52 compute-0 python3.9[141998]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 08:10:52 compute-0 sudo[141996]: pam_unix(sudo:session): session closed for user root
Nov 22 08:10:53 compute-0 sudo[142151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riilaprpxqzwgtuvbatqovjgnlauiwth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799053.0841775-409-66921662945281/AnsiballZ_systemd.py'
Nov 22 08:10:53 compute-0 sudo[142151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:53 compute-0 python3.9[142153]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 08:10:53 compute-0 sudo[142151]: pam_unix(sudo:session): session closed for user root
Nov 22 08:10:54 compute-0 sudo[142306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fskhenzdxgwqsxskyvvccnuqdfuceacp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799053.8817987-409-98524801678892/AnsiballZ_systemd.py'
Nov 22 08:10:54 compute-0 sudo[142306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:54 compute-0 python3.9[142308]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 08:10:54 compute-0 sudo[142306]: pam_unix(sudo:session): session closed for user root
Nov 22 08:10:55 compute-0 sudo[142461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzkpdafndwuhkjxagyjngwgomkmjlhtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799054.89816-409-145855398021591/AnsiballZ_systemd.py'
Nov 22 08:10:55 compute-0 sudo[142461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:55 compute-0 python3.9[142463]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 08:10:55 compute-0 sudo[142461]: pam_unix(sudo:session): session closed for user root
Nov 22 08:10:55 compute-0 sudo[142616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oziioyqartvqznxffwjolmzbzvnwcffo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799055.632439-409-237577126472753/AnsiballZ_systemd.py'
Nov 22 08:10:55 compute-0 sudo[142616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:56 compute-0 python3.9[142618]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 08:10:56 compute-0 sudo[142616]: pam_unix(sudo:session): session closed for user root
Nov 22 08:10:56 compute-0 sudo[142771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkiepusxxdvkqhqhflytsrgaavrhruez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799056.4678504-409-154733130207628/AnsiballZ_systemd.py'
Nov 22 08:10:56 compute-0 sudo[142771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:57 compute-0 python3.9[142773]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 08:10:57 compute-0 sudo[142771]: pam_unix(sudo:session): session closed for user root
Nov 22 08:10:57 compute-0 sudo[142926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhzbaargguiataeqsfidbrlrmcmgbikc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799057.2920632-409-124069526922123/AnsiballZ_systemd.py'
Nov 22 08:10:57 compute-0 sudo[142926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:57 compute-0 python3.9[142928]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 08:10:57 compute-0 sudo[142926]: pam_unix(sudo:session): session closed for user root
Nov 22 08:10:58 compute-0 sudo[143081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvygdtekjovoypzlcaymysrilxzjzqpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799058.137965-409-120609123288405/AnsiballZ_systemd.py'
Nov 22 08:10:58 compute-0 sudo[143081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:58 compute-0 python3.9[143083]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 08:10:58 compute-0 sudo[143081]: pam_unix(sudo:session): session closed for user root
Nov 22 08:10:59 compute-0 sudo[143236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylqodlszvsoldtorgxlaackbymkvzefi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799058.9703434-409-54721311934091/AnsiballZ_systemd.py'
Nov 22 08:10:59 compute-0 sudo[143236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:10:59 compute-0 python3.9[143238]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 08:10:59 compute-0 sudo[143236]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:00 compute-0 sudo[143391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkxssbktetxnhumraiavyyamivdabwuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799059.8464751-409-51316908400862/AnsiballZ_systemd.py'
Nov 22 08:11:00 compute-0 sudo[143391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:00 compute-0 python3.9[143393]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 08:11:00 compute-0 sudo[143391]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:00 compute-0 sudo[143546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gefphmjhrluxahoekhptexrqhxyzwjne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799060.6392467-409-100140692957/AnsiballZ_systemd.py'
Nov 22 08:11:00 compute-0 sudo[143546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:01 compute-0 python3.9[143548]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 08:11:01 compute-0 sudo[143546]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:01 compute-0 sudo[143701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyerqfvywmgncmaohlxkcmvzvzybjjwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799061.467012-409-244397550752340/AnsiballZ_systemd.py'
Nov 22 08:11:01 compute-0 sudo[143701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:02 compute-0 python3.9[143703]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 08:11:02 compute-0 sudo[143701]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:03 compute-0 sudo[143856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozazuloxhnjgspnkxsenannlxhjqxttz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799062.7987316-511-186059787893358/AnsiballZ_file.py'
Nov 22 08:11:03 compute-0 sudo[143856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:03 compute-0 python3.9[143858]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:11:03 compute-0 sudo[143856]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:03 compute-0 sudo[144008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ientvfwnydndeyligaahoygwuwkgihvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799063.40666-511-166452592840059/AnsiballZ_file.py'
Nov 22 08:11:03 compute-0 sudo[144008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:03 compute-0 python3.9[144010]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:11:03 compute-0 sudo[144008]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:04 compute-0 sudo[144160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqezzznligbvgjsmmwvfrdghivhmrzwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799063.9964852-511-25206858389707/AnsiballZ_file.py'
Nov 22 08:11:04 compute-0 sudo[144160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:04 compute-0 python3.9[144162]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:11:04 compute-0 sudo[144160]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:04 compute-0 sudo[144312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpbcxtljmtfoediypfvldsaacjdlmclt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799064.5661032-511-168740144354162/AnsiballZ_file.py'
Nov 22 08:11:04 compute-0 sudo[144312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:04 compute-0 python3.9[144314]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:11:05 compute-0 sudo[144312]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:05 compute-0 sudo[144464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-volchtlgpzgxkvhhzigwnyxlvajdwlhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799065.2883472-511-220918650819972/AnsiballZ_file.py'
Nov 22 08:11:05 compute-0 sudo[144464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:05 compute-0 python3.9[144466]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:11:05 compute-0 sudo[144464]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:06 compute-0 sudo[144616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxqdjndfuefwojlbiqohvjokhoqarsiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799066.0915682-511-86782350623749/AnsiballZ_file.py'
Nov 22 08:11:06 compute-0 sudo[144616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:06 compute-0 python3.9[144618]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:11:06 compute-0 sudo[144616]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:07 compute-0 sudo[144768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdcuqsjlcrcucxgpfpkfzqmewvwmtrmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799066.72808-554-59511449086847/AnsiballZ_stat.py'
Nov 22 08:11:07 compute-0 sudo[144768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:07 compute-0 python3.9[144770]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:07 compute-0 sudo[144768]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:07 compute-0 sudo[144893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abasxzagovsnimjitvhznjfimuptjemw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799066.72808-554-59511449086847/AnsiballZ_copy.py'
Nov 22 08:11:07 compute-0 sudo[144893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:08 compute-0 python3.9[144895]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763799066.72808-554-59511449086847/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:08 compute-0 sudo[144893]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:08 compute-0 sudo[145045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iienablvifnkslvktzulomzghtshwoyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799068.2562044-554-8525645592282/AnsiballZ_stat.py'
Nov 22 08:11:08 compute-0 sudo[145045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:08 compute-0 python3.9[145047]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:08 compute-0 sudo[145045]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:09 compute-0 sudo[145170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxugfowcxwzvrnamadtrhmhotwesygmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799068.2562044-554-8525645592282/AnsiballZ_copy.py'
Nov 22 08:11:09 compute-0 sudo[145170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:09 compute-0 python3.9[145172]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763799068.2562044-554-8525645592282/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:09 compute-0 sudo[145170]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:09 compute-0 sudo[145322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekyzbetlsdkfqjdaeytisrqdtuvrjiwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799069.3997214-554-51189925604428/AnsiballZ_stat.py'
Nov 22 08:11:09 compute-0 sudo[145322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:09 compute-0 python3.9[145324]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:09 compute-0 sudo[145322]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:11:09.937 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:11:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:11:09.938 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:11:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:11:09.939 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:11:10 compute-0 sudo[145447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krhmvnjyjnyuuuknxmctodedxeekfgxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799069.3997214-554-51189925604428/AnsiballZ_copy.py'
Nov 22 08:11:10 compute-0 sudo[145447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:10 compute-0 python3.9[145449]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763799069.3997214-554-51189925604428/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:10 compute-0 sudo[145447]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:10 compute-0 sudo[145599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyugyhdbesqlsexetzwxgicpeenpkxxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799070.5165098-554-147639641018090/AnsiballZ_stat.py'
Nov 22 08:11:10 compute-0 sudo[145599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:10 compute-0 python3.9[145601]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:10 compute-0 sudo[145599]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:11 compute-0 sudo[145724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnpoezpmvwdnhoetjisjddmdawnzxzxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799070.5165098-554-147639641018090/AnsiballZ_copy.py'
Nov 22 08:11:11 compute-0 sudo[145724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:11 compute-0 python3.9[145726]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763799070.5165098-554-147639641018090/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:11 compute-0 sudo[145724]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:11 compute-0 podman[145727]: 2025-11-22 08:11:11.570328091 +0000 UTC m=+0.053669337 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 08:11:11 compute-0 sudo[145893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvewvopkkadrjotpjlqwcykqsnbavtzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799071.659422-554-151077278561111/AnsiballZ_stat.py'
Nov 22 08:11:11 compute-0 sudo[145893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:12 compute-0 python3.9[145895]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:12 compute-0 sudo[145893]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:12 compute-0 sudo[146018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlbrvllwiptwfpjvnsvsvysphocurutf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799071.659422-554-151077278561111/AnsiballZ_copy.py'
Nov 22 08:11:12 compute-0 sudo[146018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:12 compute-0 python3.9[146020]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763799071.659422-554-151077278561111/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:12 compute-0 sudo[146018]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:13 compute-0 sudo[146170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bauexdlcxrtekuriaiszneivlgqqedoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799072.7764466-554-95850603202262/AnsiballZ_stat.py'
Nov 22 08:11:13 compute-0 sudo[146170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:13 compute-0 python3.9[146172]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:13 compute-0 sudo[146170]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:13 compute-0 sudo[146295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akebkpzgbxutecbhdwhduuwcgogrscha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799072.7764466-554-95850603202262/AnsiballZ_copy.py'
Nov 22 08:11:13 compute-0 sudo[146295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:13 compute-0 python3.9[146297]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763799072.7764466-554-95850603202262/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:13 compute-0 sudo[146295]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:14 compute-0 sudo[146447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdijsqyosnwsxnkbbjidwrrmvjbrtqhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799073.9000902-554-278895562049419/AnsiballZ_stat.py'
Nov 22 08:11:14 compute-0 sudo[146447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:14 compute-0 python3.9[146449]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:14 compute-0 sudo[146447]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:14 compute-0 sudo[146570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phvespkgitcgmpwlaeteddnoxgkvivuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799073.9000902-554-278895562049419/AnsiballZ_copy.py'
Nov 22 08:11:14 compute-0 sudo[146570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:14 compute-0 python3.9[146572]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763799073.9000902-554-278895562049419/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:14 compute-0 sudo[146570]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:15 compute-0 sudo[146722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwkhbcdrclgeiirzhmhsfvxruynrcypj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799075.0380962-554-7777667803459/AnsiballZ_stat.py'
Nov 22 08:11:15 compute-0 sudo[146722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:15 compute-0 python3.9[146724]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:15 compute-0 sudo[146722]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:15 compute-0 sudo[146847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iziceijojqfqzlikvqsdtftnalclzurk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799075.0380962-554-7777667803459/AnsiballZ_copy.py'
Nov 22 08:11:15 compute-0 sudo[146847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:16 compute-0 python3.9[146849]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763799075.0380962-554-7777667803459/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:16 compute-0 sudo[146847]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:16 compute-0 sudo[146999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrptnngtdedjpaeajznxgghmpipgieou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799076.2100694-667-130345965881854/AnsiballZ_command.py'
Nov 22 08:11:16 compute-0 sudo[146999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:16 compute-0 python3.9[147001]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 22 08:11:16 compute-0 sudo[146999]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:17 compute-0 sudo[147152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thrwswgvgjfblderobbkukvuquctrwwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799077.0714595-676-230716941610888/AnsiballZ_file.py'
Nov 22 08:11:17 compute-0 sudo[147152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:17 compute-0 python3.9[147154]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:17 compute-0 sudo[147152]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:17 compute-0 sudo[147304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmhvdopehhedisgqftwuzuwloplejrev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799077.6968813-676-27432591602038/AnsiballZ_file.py'
Nov 22 08:11:17 compute-0 sudo[147304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:18 compute-0 python3.9[147306]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:18 compute-0 sudo[147304]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:18 compute-0 sudo[147456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usamtvxtxbtvfkdlgutrugbnpvlrbdbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799078.3273275-676-1625483742750/AnsiballZ_file.py'
Nov 22 08:11:18 compute-0 sudo[147456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:18 compute-0 python3.9[147458]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:18 compute-0 sudo[147456]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:19 compute-0 sudo[147608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuezthjexsoibtipkprokinsewqgspwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799078.9988713-676-256304723478365/AnsiballZ_file.py'
Nov 22 08:11:19 compute-0 sudo[147608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:19 compute-0 python3.9[147610]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:19 compute-0 sudo[147608]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:19 compute-0 sudo[147777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxclrswaybqcwhrerjrcqxjfnztvqspl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799079.6966527-676-149943690082708/AnsiballZ_file.py'
Nov 22 08:11:19 compute-0 sudo[147777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:19 compute-0 podman[147734]: 2025-11-22 08:11:19.989765986 +0000 UTC m=+0.075366771 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:11:20 compute-0 python3.9[147784]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:20 compute-0 sudo[147777]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:20 compute-0 sudo[147938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtcuplaooptkwrszxnxjjpoarokxobxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799080.3548489-676-133620022728050/AnsiballZ_file.py'
Nov 22 08:11:20 compute-0 sudo[147938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:20 compute-0 python3.9[147940]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:20 compute-0 sudo[147938]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:21 compute-0 sudo[148090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egwjpxpncdciwfzwgxjjggpgcxvoqqkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799081.042628-676-186786137891846/AnsiballZ_file.py'
Nov 22 08:11:21 compute-0 sudo[148090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:21 compute-0 python3.9[148092]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:21 compute-0 sudo[148090]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:22 compute-0 sudo[148242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkoeflnyyyvqupvgyxttuxzfycysemdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799081.780132-676-69652237665684/AnsiballZ_file.py'
Nov 22 08:11:22 compute-0 sudo[148242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:22 compute-0 python3.9[148244]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:22 compute-0 sudo[148242]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:22 compute-0 sudo[148394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyuiimgogrbaywkvmympilghctroickb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799082.3678293-676-199004485393132/AnsiballZ_file.py'
Nov 22 08:11:22 compute-0 sudo[148394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:22 compute-0 python3.9[148396]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:22 compute-0 sudo[148394]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:23 compute-0 sudo[148546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmegnhwuaelwftwxawdqsidyeddpwzgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799082.9281797-676-124582383262517/AnsiballZ_file.py'
Nov 22 08:11:23 compute-0 sudo[148546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:23 compute-0 python3.9[148548]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:23 compute-0 sudo[148546]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:23 compute-0 sudo[148698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzzijcoxrzoxeplkvaiiklhspgoqfmiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799083.5162055-676-18811269078780/AnsiballZ_file.py'
Nov 22 08:11:23 compute-0 sudo[148698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:23 compute-0 python3.9[148700]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:23 compute-0 sudo[148698]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:24 compute-0 sudo[148850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlaentodfcvuosxprafnfwiorowakmqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799084.068899-676-156292296306910/AnsiballZ_file.py'
Nov 22 08:11:24 compute-0 sudo[148850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:24 compute-0 python3.9[148852]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:24 compute-0 sudo[148850]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:24 compute-0 sudo[149002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rshigmumxhoadesnlxtrpmsnkuotccbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799084.6640315-676-126417506637643/AnsiballZ_file.py'
Nov 22 08:11:24 compute-0 sudo[149002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:25 compute-0 python3.9[149004]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:25 compute-0 sudo[149002]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:25 compute-0 sudo[149154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyyyusjazwrrrazdesmrvphnbqkbaqem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799085.250671-676-278893331744163/AnsiballZ_file.py'
Nov 22 08:11:25 compute-0 sudo[149154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:25 compute-0 python3.9[149156]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:25 compute-0 sudo[149154]: pam_unix(sudo:session): session closed for user root
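The file tasks above pre-create ".socket.d" drop-in directories under /etc/systemd/system for each modular libvirt socket unit before any override.conf lands in them. A minimal shell sketch of the same pattern, assuming root; the [Socket] body below is illustrative only, since the rendered libvirt-socket.unit.j2 content is masked in the log (content=NOT_LOGGING_PARAMETER):

    # Create the drop-in directory, as ansible.builtin.file does above
    install -d -m 0755 -o root -g root /etc/systemd/system/virtqemud.socket.d
    # Write an illustrative override (NOT the real template rendering)
    cat > /etc/systemd/system/virtqemud.socket.d/override.conf <<'EOF'
    [Socket]
    SocketMode=0666
    EOF
    systemctl daemon-reload   # make systemd pick up the drop-in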
Nov 22 08:11:26 compute-0 sudo[149306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iucphtmaxxlgazeaelezjfjkhbpbhudw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799085.9200277-775-247197459449074/AnsiballZ_stat.py'
Nov 22 08:11:26 compute-0 sudo[149306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:26 compute-0 python3.9[149308]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:26 compute-0 sudo[149306]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:26 compute-0 sudo[149429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kulhedsklaaoajdtpqigvfzbtcrxgqpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799085.9200277-775-247197459449074/AnsiballZ_copy.py'
Nov 22 08:11:26 compute-0 sudo[149429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:26 compute-0 python3.9[149431]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763799085.9200277-775-247197459449074/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:27 compute-0 sudo[149429]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:27 compute-0 sudo[149581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncfepxpejxiumzvokdiygrgnelelwtsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799087.1572752-775-245562918378490/AnsiballZ_stat.py'
Nov 22 08:11:27 compute-0 sudo[149581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:27 compute-0 python3.9[149583]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:27 compute-0 sudo[149581]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:27 compute-0 sudo[149704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riszwzixezfrudadjfsaviqjvxhcyyaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799087.1572752-775-245562918378490/AnsiballZ_copy.py'
Nov 22 08:11:27 compute-0 sudo[149704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:28 compute-0 python3.9[149706]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763799087.1572752-775-245562918378490/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:28 compute-0 sudo[149704]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:28 compute-0 sudo[149856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnyupcepembclxhbzdpxgwzmlrxtzkrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799088.3868158-775-244218772938962/AnsiballZ_stat.py'
Nov 22 08:11:28 compute-0 sudo[149856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:28 compute-0 python3.9[149858]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:28 compute-0 sudo[149856]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:29 compute-0 sudo[149979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxwzkjueukwbingxhdpscgogisdlbdfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799088.3868158-775-244218772938962/AnsiballZ_copy.py'
Nov 22 08:11:29 compute-0 sudo[149979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:29 compute-0 python3.9[149981]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763799088.3868158-775-244218772938962/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:29 compute-0 sudo[149979]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:29 compute-0 sudo[150131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvrzklpyepicknjxxylmoioeropbchrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799089.599703-775-234875846624627/AnsiballZ_stat.py'
Nov 22 08:11:29 compute-0 sudo[150131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:30 compute-0 python3.9[150133]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:30 compute-0 sudo[150131]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:30 compute-0 sudo[150254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glhwoxmlnygralyuusltwrsrafsjpmyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799089.599703-775-234875846624627/AnsiballZ_copy.py'
Nov 22 08:11:30 compute-0 sudo[150254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:30 compute-0 python3.9[150256]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763799089.599703-775-234875846624627/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:30 compute-0 sudo[150254]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:31 compute-0 sudo[150406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zryzuvghofibqqjgkvyotiivyqfwehsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799090.7713304-775-191393344822491/AnsiballZ_stat.py'
Nov 22 08:11:31 compute-0 sudo[150406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:31 compute-0 python3.9[150408]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:31 compute-0 sudo[150406]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:31 compute-0 sudo[150529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asdlkazaesuopkxhecxtfduupoyaaetq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799090.7713304-775-191393344822491/AnsiballZ_copy.py'
Nov 22 08:11:31 compute-0 sudo[150529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:31 compute-0 python3.9[150531]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763799090.7713304-775-191393344822491/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:31 compute-0 sudo[150529]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:32 compute-0 sudo[150681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvjeenqszeqofvubnceiwtktamtgxslk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799091.8442981-775-264750606117026/AnsiballZ_stat.py'
Nov 22 08:11:32 compute-0 sudo[150681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:32 compute-0 python3.9[150683]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:32 compute-0 sudo[150681]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:32 compute-0 sudo[150804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlekyseeagfdnvwmqyrktzdmhmhoeynj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799091.8442981-775-264750606117026/AnsiballZ_copy.py'
Nov 22 08:11:32 compute-0 sudo[150804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:32 compute-0 python3.9[150806]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763799091.8442981-775-264750606117026/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:32 compute-0 sudo[150804]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:33 compute-0 sudo[150956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djzkvlgwewskoaldkkcauqmujjdammes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799092.9208033-775-258289041894009/AnsiballZ_stat.py'
Nov 22 08:11:33 compute-0 sudo[150956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:33 compute-0 python3.9[150958]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:33 compute-0 sudo[150956]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:33 compute-0 sudo[151079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deydybdwjwwresdqxtmxvmvwcwdvdtkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799092.9208033-775-258289041894009/AnsiballZ_copy.py'
Nov 22 08:11:33 compute-0 sudo[151079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:33 compute-0 python3.9[151081]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763799092.9208033-775-258289041894009/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:33 compute-0 sudo[151079]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:34 compute-0 sudo[151231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emuigvgxfsavvjkiejguonbbozbginpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799094.01208-775-175270888464470/AnsiballZ_stat.py'
Nov 22 08:11:34 compute-0 sudo[151231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:34 compute-0 python3.9[151233]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:34 compute-0 sudo[151231]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:34 compute-0 sudo[151354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovcwjvinuaodwlmkhuxbnjcmgnpxcpwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799094.01208-775-175270888464470/AnsiballZ_copy.py'
Nov 22 08:11:34 compute-0 sudo[151354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:34 compute-0 python3.9[151356]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763799094.01208-775-175270888464470/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:34 compute-0 sudo[151354]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:35 compute-0 sudo[151506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukclounkdwgeocsxdbumnpllawadewym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799095.0773332-775-49965174524297/AnsiballZ_stat.py'
Nov 22 08:11:35 compute-0 sudo[151506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:35 compute-0 python3.9[151508]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:35 compute-0 sudo[151506]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:35 compute-0 sudo[151629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltlqaexywmqeuoabrcrdmflmhoeflsnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799095.0773332-775-49965174524297/AnsiballZ_copy.py'
Nov 22 08:11:35 compute-0 sudo[151629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:36 compute-0 python3.9[151631]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763799095.0773332-775-49965174524297/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:36 compute-0 sudo[151629]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:36 compute-0 sudo[151781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muwbkbohjteqquscfbbslmlntgavklza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799096.1967766-775-215187770224069/AnsiballZ_stat.py'
Nov 22 08:11:36 compute-0 sudo[151781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:36 compute-0 python3.9[151783]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:36 compute-0 sudo[151781]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:37 compute-0 sudo[151904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjzukjtdymgqchaahmamvnsxtehptuka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799096.1967766-775-215187770224069/AnsiballZ_copy.py'
Nov 22 08:11:37 compute-0 sudo[151904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:37 compute-0 python3.9[151906]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763799096.1967766-775-215187770224069/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:37 compute-0 sudo[151904]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:37 compute-0 sudo[152056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpnsamevisvkucjnlhqqmutylzaucibj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799097.3283184-775-59338470148349/AnsiballZ_stat.py'
Nov 22 08:11:37 compute-0 sudo[152056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:37 compute-0 python3.9[152058]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:37 compute-0 sudo[152056]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:38 compute-0 sudo[152179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eshhvsfgklkzhwnqgrkzifaulkypcocm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799097.3283184-775-59338470148349/AnsiballZ_copy.py'
Nov 22 08:11:38 compute-0 sudo[152179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:38 compute-0 python3.9[152181]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763799097.3283184-775-59338470148349/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:38 compute-0 sudo[152179]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:38 compute-0 sudo[152331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpqrdooqmogingoenywnyedauwqbblis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799098.4121904-775-185866562196382/AnsiballZ_stat.py'
Nov 22 08:11:38 compute-0 sudo[152331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:38 compute-0 python3.9[152333]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:38 compute-0 sudo[152331]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:39 compute-0 sudo[152454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yedqkygrjxfewozvjmwblbutmtrwcfgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799098.4121904-775-185866562196382/AnsiballZ_copy.py'
Nov 22 08:11:39 compute-0 sudo[152454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:39 compute-0 python3.9[152456]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763799098.4121904-775-185866562196382/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:39 compute-0 sudo[152454]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:39 compute-0 sudo[152606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quzkoqrictsxoocamrceeacaemqjwexv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799099.5148866-775-233252054725135/AnsiballZ_stat.py'
Nov 22 08:11:39 compute-0 sudo[152606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:39 compute-0 python3.9[152608]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:39 compute-0 sudo[152606]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:40 compute-0 sudo[152729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyvrqovmecpbdkrboizavqqolqyuecdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799099.5148866-775-233252054725135/AnsiballZ_copy.py'
Nov 22 08:11:40 compute-0 sudo[152729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:40 compute-0 python3.9[152731]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763799099.5148866-775-233252054725135/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:40 compute-0 sudo[152729]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:40 compute-0 sudo[152881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvbahktuavtnuyaebrdfviwfrtdklzkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799100.5872936-775-114342218893084/AnsiballZ_stat.py'
Nov 22 08:11:40 compute-0 sudo[152881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:41 compute-0 python3.9[152883]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:41 compute-0 sudo[152881]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:41 compute-0 sudo[153004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrkiwloanxwqzizluwkdtpxwqngqrocw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799100.5872936-775-114342218893084/AnsiballZ_copy.py'
Nov 22 08:11:41 compute-0 sudo[153004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:41 compute-0 python3.9[153006]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763799100.5872936-775-114342218893084/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:41 compute-0 sudo[153004]: pam_unix(sudo:session): session closed for user root
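Every copy task in this run reports the same checksum (0bad41f409b4ee7e780a2a59dc18f5c84ed99826), so all of the socket units receive an identical rendering of libvirt-socket.unit.j2. One way to confirm afterwards which units actually picked up drop-in overrides, assuming root on the compute node:

    # Units whose effective configuration is extended by *.conf drop-ins
    systemd-delta --type=extended | grep -E 'virt(logd|nodedevd|proxyd|qemud|secretd)'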
Nov 22 08:11:42 compute-0 podman[153130]: 2025-11-22 08:11:42.011154256 +0000 UTC m=+0.045381729 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 22 08:11:42 compute-0 python3.9[153170]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:11:42 compute-0 sudo[153327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqxdzebxmifnaqfpzsrzkqbguniecvuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799102.391045-981-234634516675656/AnsiballZ_seboolean.py'
Nov 22 08:11:42 compute-0 sudo[153327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:43 compute-0 python3.9[153329]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 22 08:11:44 compute-0 sudo[153327]: pam_unix(sudo:session): session closed for user root
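The seboolean task persistently flips the os_enable_vtpm SELinux boolean, used for virtual TPM support on compute nodes. Its shell equivalent, assuming root:

    setsebool -P os_enable_vtpm on   # -P persists across reboots (persistent=True)
    getsebool os_enable_vtpm         # verify: prints "os_enable_vtpm --> on"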
Nov 22 08:11:44 compute-0 sudo[153483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhfmkkrlegofuvmypkqomvqrgmazwmox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799104.4921944-989-7906570570741/AnsiballZ_copy.py'
Nov 22 08:11:44 compute-0 dbus-broker-launch[817]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 22 08:11:44 compute-0 sudo[153483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:44 compute-0 python3.9[153485]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:44 compute-0 sudo[153483]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:45 compute-0 sudo[153635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghfvwbiquzmyhzciksehfrzglxxfpjab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799105.1155846-989-62541821180737/AnsiballZ_copy.py'
Nov 22 08:11:45 compute-0 sudo[153635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:45 compute-0 python3.9[153637]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:45 compute-0 sudo[153635]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:45 compute-0 sudo[153787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyhwejiomhitaasdilvbcufdazcglclt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799105.713917-989-199151581198540/AnsiballZ_copy.py'
Nov 22 08:11:46 compute-0 sudo[153787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:46 compute-0 python3.9[153789]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:46 compute-0 sudo[153787]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:46 compute-0 sudo[153939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yewcdjugcjnmztahkqljbzhhmbloajos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799106.3645067-989-159703276239890/AnsiballZ_copy.py'
Nov 22 08:11:46 compute-0 sudo[153939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:46 compute-0 python3.9[153941]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:46 compute-0 sudo[153939]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:47 compute-0 sudo[154091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsitwicwowfziychcwbytievxfkbbjyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799106.9349725-989-107220517346477/AnsiballZ_copy.py'
Nov 22 08:11:47 compute-0 sudo[154091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:47 compute-0 python3.9[154093]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:47 compute-0 sudo[154091]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:47 compute-0 sudo[154243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqnkeebncqukoxlwnhvlsuzgolhyksiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799107.6919398-1025-161638845960346/AnsiballZ_copy.py'
Nov 22 08:11:47 compute-0 sudo[154243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:48 compute-0 python3.9[154245]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:48 compute-0 sudo[154243]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:48 compute-0 sudo[154395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukldptkmvkjakbthyaatupqbcbypbezf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799108.2580903-1025-79727482679550/AnsiballZ_copy.py'
Nov 22 08:11:48 compute-0 sudo[154395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:48 compute-0 python3.9[154397]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:48 compute-0 sudo[154395]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:49 compute-0 sudo[154547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mftlgwkltmzuizmzwsafmpvykdvkoywf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799108.9813008-1025-226375278904226/AnsiballZ_copy.py'
Nov 22 08:11:49 compute-0 sudo[154547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:49 compute-0 python3.9[154549]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:49 compute-0 sudo[154547]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:49 compute-0 sudo[154699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpmrqkhwqicuehqrxwriywbsbezjhigf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799109.6876628-1025-172741379222962/AnsiballZ_copy.py'
Nov 22 08:11:49 compute-0 sudo[154699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:50 compute-0 podman[154702]: 2025-11-22 08:11:50.153950981 +0000 UTC m=+0.099959815 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:11:50 compute-0 python3.9[154701]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:50 compute-0 sudo[154699]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:50 compute-0 sudo[154878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftqkobcwisfmyyzwupbjwmmwjikxihnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799110.3195004-1025-80460040341460/AnsiballZ_copy.py'
Nov 22 08:11:50 compute-0 sudo[154878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:50 compute-0 python3.9[154880]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:50 compute-0 sudo[154878]: pam_unix(sudo:session): session closed for user root
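The copy tasks above fan a single TLS certificate/key pair from /var/lib/openstack/certs/libvirt/default out to both the libvirt locations (/etc/pki/libvirt, /etc/pki/CA) and the QEMU ones (/etc/pki/qemu); note that clientkey.pem is deployed with mode 0644, unlike serverkey.pem's 0600. A quick consistency check of the deployed material, assuming root and a bash shell:

    # Each leaf certificate should verify against the deployed CA
    openssl verify -CAfile /etc/pki/CA/cacert.pem /etc/pki/libvirt/servercert.pem
    openssl verify -CAfile /etc/pki/qemu/ca-cert.pem /etc/pki/qemu/server-cert.pem
    # Key and certificate should carry the same public key
    diff <(openssl x509 -in /etc/pki/libvirt/servercert.pem -noout -pubkey) \
         <(openssl pkey -in /etc/pki/libvirt/private/serverkey.pem -pubout)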
Nov 22 08:11:51 compute-0 sudo[155030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtmcfpfygulguffdeqmbxmsxsbphcjlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799110.9395537-1061-29355835143152/AnsiballZ_systemd.py'
Nov 22 08:11:51 compute-0 sudo[155030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:51 compute-0 python3.9[155032]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:11:51 compute-0 systemd[1]: Reloading.
Nov 22 08:11:51 compute-0 systemd-rc-local-generator[155060]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:11:51 compute-0 systemd-sysv-generator[155063]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:11:51 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Nov 22 08:11:51 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Nov 22 08:11:51 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 22 08:11:51 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 22 08:11:51 compute-0 systemd[1]: Starting libvirt logging daemon...
Nov 22 08:11:51 compute-0 systemd[1]: Started libvirt logging daemon.
Nov 22 08:11:52 compute-0 sudo[155030]: pam_unix(sudo:session): session closed for user root
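Each of the restarts that follow is the module form of a daemon reload plus a service restart; the unit's sockets are pulled in first, which is why the "Listening on ..." lines precede "Started ...". The shell equivalent for this first unit, assuming root:

    systemctl daemon-reload               # daemon_reload=True
    systemctl restart virtlogd.service    # state=restarted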
Nov 22 08:11:52 compute-0 sudo[155223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnyyoyfpzjgdfposnzmaqbcilxnxllyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799112.1623392-1061-238921146505270/AnsiballZ_systemd.py'
Nov 22 08:11:52 compute-0 sudo[155223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:52 compute-0 python3.9[155225]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:11:52 compute-0 systemd[1]: Reloading.
Nov 22 08:11:52 compute-0 systemd-rc-local-generator[155252]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:11:52 compute-0 systemd-sysv-generator[155255]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:11:53 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 22 08:11:53 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 22 08:11:53 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 22 08:11:53 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 22 08:11:53 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 22 08:11:53 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 22 08:11:53 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 22 08:11:53 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 22 08:11:53 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 22 08:11:53 compute-0 sudo[155223]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:53 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 22 08:11:53 compute-0 sudo[155440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htdsnsmqqvfwxrdkzxlefecsbaaletzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799113.222105-1061-126255532838641/AnsiballZ_systemd.py'
Nov 22 08:11:53 compute-0 sudo[155440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:53 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 22 08:11:53 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 22 08:11:53 compute-0 python3.9[155443]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:11:53 compute-0 systemd[1]: Reloading.
Nov 22 08:11:53 compute-0 systemd-sysv-generator[155478]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:11:53 compute-0 systemd-rc-local-generator[155473]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:11:54 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 22 08:11:54 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 22 08:11:54 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 22 08:11:54 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 22 08:11:54 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 22 08:11:54 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 22 08:11:54 compute-0 sudo[155440]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:54 compute-0 setroubleshoot[155287]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l fe3a79f4-bf84-40b1-81ca-99803ca4a1fd
Nov 22 08:11:54 compute-0 setroubleshoot[155287]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify whether the domain needs this access, or whether a file on your system has the wrong permissions,
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default,
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
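Before adopting the catchall plugin's audit2allow suggestion, the underlying AVC can be inspected to see what virtlogd actually touched; dac_read_search denials against root-owned processes are often benign probe accesses, so a local allow module should be a last resort. Assuming root:

    # Show the recent raw AVC records for virtlogd
    ausearch -m avc -c 'virtlogd' -ts recent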
Nov 22 08:11:54 compute-0 sudo[155661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sidalmeavpvhjrzxrhgpblljwztyxtzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799114.3344145-1061-260757703583282/AnsiballZ_systemd.py'
Nov 22 08:11:54 compute-0 sudo[155661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:54 compute-0 python3.9[155663]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:11:54 compute-0 systemd[1]: Reloading.
Nov 22 08:11:54 compute-0 systemd-rc-local-generator[155688]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:11:54 compute-0 systemd-sysv-generator[155692]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:11:55 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Nov 22 08:11:55 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 22 08:11:55 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 22 08:11:55 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 22 08:11:55 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 22 08:11:55 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 22 08:11:55 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 22 08:11:55 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 22 08:11:55 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 22 08:11:55 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 22 08:11:55 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 22 08:11:55 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 22 08:11:55 compute-0 sudo[155661]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:55 compute-0 sudo[155876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-groweojoarmwzgdbimhbrmaziawtmjnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799115.3845139-1061-91409201598109/AnsiballZ_systemd.py'
Nov 22 08:11:55 compute-0 sudo[155876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:55 compute-0 python3.9[155878]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:11:55 compute-0 systemd[1]: Reloading.
Nov 22 08:11:56 compute-0 systemd-sysv-generator[155904]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:11:56 compute-0 systemd-rc-local-generator[155901]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:11:56 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Nov 22 08:11:56 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Nov 22 08:11:56 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 22 08:11:56 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 22 08:11:56 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 22 08:11:56 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 22 08:11:56 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 22 08:11:56 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 22 08:11:56 compute-0 sudo[155876]: pam_unix(sudo:session): session closed for user root
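At this point all five modular libvirt daemons (virtlogd, virtnodedevd, virtproxyd, virtqemud, virtsecretd) have been restarted together with their sockets. A quick way to confirm the whole set is up, assuming root on the compute node:

    systemctl --no-pager --type=service --state=running list-units 'virt*'
    systemctl --no-pager list-sockets 'virt*'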
Nov 22 08:11:56 compute-0 sudo[156088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcwkdbsqucqqlxlcvcopmzxzkhknlwdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799116.5011015-1098-80901445699555/AnsiballZ_file.py'
Nov 22 08:11:56 compute-0 sudo[156088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:56 compute-0 python3.9[156090]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:56 compute-0 sudo[156088]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:57 compute-0 sudo[156240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gojtpawvexvlttyjxejslhaffteoptud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799117.1498404-1106-42842885532336/AnsiballZ_find.py'
Nov 22 08:11:57 compute-0 sudo[156240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:57 compute-0 python3.9[156242]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 08:11:57 compute-0 sudo[156240]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:58 compute-0 sudo[156392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggprwbfmbcdavtutaxkoizcrthkgdyos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799117.9466999-1120-208831220825997/AnsiballZ_stat.py'
Nov 22 08:11:58 compute-0 sudo[156392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:58 compute-0 python3.9[156394]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:11:58 compute-0 sudo[156392]: pam_unix(sudo:session): session closed for user root
Nov 22 08:11:58 compute-0 sudo[156515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emfxpadkjwfzsettezcvqvngvovucmjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799117.9466999-1120-208831220825997/AnsiballZ_copy.py'
Nov 22 08:11:58 compute-0 sudo[156515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:58 compute-0 python3.9[156517]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799117.9466999-1120-208831220825997/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:58 compute-0 sudo[156515]: pam_unix(sudo:session): session closed for user root
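Each file deployment in this log follows the same two-step pattern: ansible.legacy.stat computes the SHA-1 of the file already on the host, and ansible.legacy.copy transfers the staged .source file only when that checksum differs from the one recorded in the task (here 5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061). To verify the deployed result by hand:

    # should print the checksum logged by the copy task
    sha1sum /var/lib/edpm-config/firewall/libvirt.yaml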
Nov 22 08:11:59 compute-0 sudo[156668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzaxoyirpqrkrtwkdxzkmorjjoqqpomu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799119.2399144-1136-165342891735122/AnsiballZ_file.py'
Nov 22 08:11:59 compute-0 sudo[156668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:11:59 compute-0 python3.9[156670]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:11:59 compute-0 sudo[156668]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:00 compute-0 sudo[156820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umdeoeaxrstkjhbtephcwzbmyraesplv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799119.9054205-1144-231443856946893/AnsiballZ_stat.py'
Nov 22 08:12:00 compute-0 sudo[156820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:00 compute-0 python3.9[156822]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:12:00 compute-0 sudo[156820]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:00 compute-0 sudo[156898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggdtexnshzlndawfbuylggauuqqrvtqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799119.9054205-1144-231443856946893/AnsiballZ_file.py'
Nov 22 08:12:00 compute-0 sudo[156898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:00 compute-0 python3.9[156900]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:12:00 compute-0 sudo[156898]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:01 compute-0 sudo[157050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqnbcvshxkwkbaejnocoxmqljvrekngx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799121.01792-1156-214215648560846/AnsiballZ_stat.py'
Nov 22 08:12:01 compute-0 sudo[157050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:01 compute-0 python3.9[157052]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:12:01 compute-0 sudo[157050]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:01 compute-0 sudo[157128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csvelunovpiysmffgavfqogfvujjjpoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799121.01792-1156-214215648560846/AnsiballZ_file.py'
Nov 22 08:12:01 compute-0 sudo[157128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:02 compute-0 python3.9[157130]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._p83abv2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:12:02 compute-0 sudo[157128]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:02 compute-0 sudo[157280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whwnkygnogjziufgahoofxlrjwdzpwgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799122.3911548-1168-140664442767496/AnsiballZ_stat.py'
Nov 22 08:12:02 compute-0 sudo[157280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:02 compute-0 python3.9[157282]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:12:02 compute-0 sudo[157280]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:03 compute-0 sudo[157358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npbgdpeiuofqecotxhsrobiqzjtwfdnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799122.3911548-1168-140664442767496/AnsiballZ_file.py'
Nov 22 08:12:03 compute-0 sudo[157358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:03 compute-0 python3.9[157360]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:12:03 compute-0 sudo[157358]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:03 compute-0 sudo[157510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvvbfuyweedispftrenjoamvfqawitzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799123.5819743-1181-107936257022053/AnsiballZ_command.py'
Nov 22 08:12:03 compute-0 sudo[157510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:04 compute-0 python3.9[157512]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:12:04 compute-0 sudo[157510]: pam_unix(sudo:session): session closed for user root
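nft's -j flag emits the ruleset as JSON, which is easier for the playbook to diff than the text listing. For manual inspection something like the following works (the jq filter is illustrative, not taken from the log):

    # list the names of all chains in the live ruleset
    nft -j list ruleset | jq '.nftables[] | select(.chain) | .chain.name'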
Nov 22 08:12:04 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 22 08:12:04 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 22 08:12:04 compute-0 sudo[157663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhxfifilqtguvlbvosfvuiyupnmupxsi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763799124.3254654-1189-41314145326443/AnsiballZ_edpm_nftables_from_files.py'
Nov 22 08:12:04 compute-0 sudo[157663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:04 compute-0 python3[157665]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 22 08:12:05 compute-0 sudo[157663]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:05 compute-0 sudo[157815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwsdzrqvwntceeyqezvqpnsslceyotxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799125.2591555-1197-218596580702070/AnsiballZ_stat.py'
Nov 22 08:12:05 compute-0 sudo[157815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:05 compute-0 python3.9[157817]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:12:05 compute-0 sudo[157815]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:06 compute-0 sudo[157893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzvtgypkzefuqrvwtfxlyqzpmygnawno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799125.2591555-1197-218596580702070/AnsiballZ_file.py'
Nov 22 08:12:06 compute-0 sudo[157893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:06 compute-0 python3.9[157895]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:12:06 compute-0 sudo[157893]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:06 compute-0 sudo[158045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrujhlmrnhumifnwafhtkmjwudmmvpxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799126.6387038-1209-199747491339122/AnsiballZ_stat.py'
Nov 22 08:12:06 compute-0 sudo[158045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:07 compute-0 python3.9[158047]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:12:07 compute-0 sudo[158045]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:07 compute-0 sudo[158123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuhealnolbpxhvdlsaaqtcvhokuylxan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799126.6387038-1209-199747491339122/AnsiballZ_file.py'
Nov 22 08:12:07 compute-0 sudo[158123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:07 compute-0 python3.9[158125]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:12:07 compute-0 sudo[158123]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:08 compute-0 sudo[158275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuidsbjyiubdqqgiuxzyilcbspgsyqhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799127.7861314-1221-137039711004309/AnsiballZ_stat.py'
Nov 22 08:12:08 compute-0 sudo[158275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:08 compute-0 python3.9[158277]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:12:08 compute-0 sudo[158275]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:08 compute-0 sudo[158353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czuoetxekilyvrgtqozzwgnbjntlqqpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799127.7861314-1221-137039711004309/AnsiballZ_file.py'
Nov 22 08:12:08 compute-0 sudo[158353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:08 compute-0 python3.9[158355]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:12:08 compute-0 sudo[158353]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:09 compute-0 sudo[158505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqdgwtpviqgqnqdchkycwwpkycldhwke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799129.0413997-1233-244011695197860/AnsiballZ_stat.py'
Nov 22 08:12:09 compute-0 sudo[158505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:09 compute-0 python3.9[158507]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:12:09 compute-0 sudo[158505]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:09 compute-0 sudo[158583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzhkathfckekpcimnatvspeamdiadpfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799129.0413997-1233-244011695197860/AnsiballZ_file.py'
Nov 22 08:12:09 compute-0 sudo[158583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:12:09.938 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:12:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:12:09.940 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:12:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:12:09.940 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:12:10 compute-0 python3.9[158585]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:12:10 compute-0 sudo[158583]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:10 compute-0 sudo[158735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovuslxzlzjowlddcicnkqfhuecxckcge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799130.4602137-1245-88970721525515/AnsiballZ_stat.py'
Nov 22 08:12:10 compute-0 sudo[158735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:11 compute-0 python3.9[158737]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:12:11 compute-0 sudo[158735]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:11 compute-0 sudo[158860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etbugyyrirckwglcpylpvfyzenxcbefs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799130.4602137-1245-88970721525515/AnsiballZ_copy.py'
Nov 22 08:12:11 compute-0 sudo[158860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:11 compute-0 python3.9[158862]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763799130.4602137-1245-88970721525515/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:12:11 compute-0 sudo[158860]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:12 compute-0 sudo[159026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrkbpbvzdnsdgrkfdpgzuspmfpohiyyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799131.9499285-1260-241268846329144/AnsiballZ_file.py'
Nov 22 08:12:12 compute-0 sudo[159026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:12 compute-0 podman[158986]: 2025-11-22 08:12:12.26656124 +0000 UTC m=+0.086853311 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:12:12 compute-0 python3.9[159031]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:12:12 compute-0 sudo[159026]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:12 compute-0 sudo[159182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmnnksqlojbclvbxiyakmnexzsqtesib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799132.6604679-1268-128791509086820/AnsiballZ_command.py'
Nov 22 08:12:12 compute-0 sudo[159182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:13 compute-0 python3.9[159184]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:12:13 compute-0 sudo[159182]: pam_unix(sudo:session): session closed for user root
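The validation step concatenates the generated fragments in load order (chains first, then flushes, rules, and the jump updates) and pipes them through nft -c, which parses everything without touching the live ruleset:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -   # -c: check only, no commit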
Nov 22 08:12:13 compute-0 sudo[159337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynwuuanjylgmhpatwlhbauxdiufxkpph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799133.3318858-1276-50537793565111/AnsiballZ_blockinfile.py'
Nov 22 08:12:13 compute-0 sudo[159337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:13 compute-0 python3.9[159339]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:12:13 compute-0 sudo[159337]: pam_unix(sudo:session): session closed for user root
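Given marker='# {mark} ANSIBLE MANAGED BLOCK' and validate='nft -c -f %s', the blockinfile task leaves a block like this in /etc/sysconfig/nftables.conf, which is what makes the ruleset persist across reboots (reconstructed from the module arguments above):

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK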
Nov 22 08:12:14 compute-0 sudo[159489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bechkwwmmjrvnuzbtbjvpjlvkmzeokjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799134.2056231-1285-33892776016602/AnsiballZ_command.py'
Nov 22 08:12:14 compute-0 sudo[159489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:14 compute-0 python3.9[159491]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:12:14 compute-0 sudo[159489]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:15 compute-0 sudo[159642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eneuiyujitcatxlqeiklqfpnyqyvuuhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799134.9773152-1293-7485445461647/AnsiballZ_stat.py'
Nov 22 08:12:15 compute-0 sudo[159642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:15 compute-0 python3.9[159644]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:12:15 compute-0 sudo[159642]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:16 compute-0 sudo[159796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saeatcahzuqmrqsccbcllnyqusagdegw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799135.6786284-1301-109580202678480/AnsiballZ_command.py'
Nov 22 08:12:16 compute-0 sudo[159796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:16 compute-0 python3.9[159798]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:12:16 compute-0 sudo[159796]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:16 compute-0 sudo[159951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rklcoxhffdfofgohjnsjylmyhtxlfcom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799136.5292199-1309-241010207597806/AnsiballZ_file.py'
Nov 22 08:12:16 compute-0 sudo[159951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:17 compute-0 python3.9[159953]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:12:17 compute-0 sudo[159951]: pam_unix(sudo:session): session closed for user root
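The touch/stat/remove sequence around /etc/nftables/edpm-rules.nft.changed is a change marker: it is created when edpm-rules.nft is rewritten, tested before the live reload, and deleted once the flush/rules/jump-update batch has been applied, so an unchanged ruleset skips the reload on the next run. Sketched as shell:

    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        # flush managed chains, load new rules, refresh jump chains
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi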
Nov 22 08:12:17 compute-0 sudo[160103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udyhcrhogbfhfcmmbxgddhjozgppttjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799137.2163548-1317-197528547320758/AnsiballZ_stat.py'
Nov 22 08:12:17 compute-0 sudo[160103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:17 compute-0 python3.9[160105]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:12:17 compute-0 sudo[160103]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:18 compute-0 sudo[160226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbpuiisoawcdixsvaxeteobyqvqaflmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799137.2163548-1317-197528547320758/AnsiballZ_copy.py'
Nov 22 08:12:18 compute-0 sudo[160226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:18 compute-0 python3.9[160228]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799137.2163548-1317-197528547320758/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:12:18 compute-0 sudo[160226]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:18 compute-0 sudo[160378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slliriockumkmncbbrmgrujokwebdzir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799138.4317956-1332-118204717573590/AnsiballZ_stat.py'
Nov 22 08:12:18 compute-0 sudo[160378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:18 compute-0 python3.9[160380]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:12:18 compute-0 sudo[160378]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:19 compute-0 sudo[160501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddsompyjqbyjbrtkkxutmgfsoxxrcpfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799138.4317956-1332-118204717573590/AnsiballZ_copy.py'
Nov 22 08:12:19 compute-0 sudo[160501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:19 compute-0 python3.9[160503]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799138.4317956-1332-118204717573590/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:12:19 compute-0 sudo[160501]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:19 compute-0 sudo[160653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwvywwkjobjoyceukulecoasasujhdgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799139.6983502-1347-107423352458481/AnsiballZ_stat.py'
Nov 22 08:12:19 compute-0 sudo[160653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:20 compute-0 python3.9[160655]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:12:20 compute-0 sudo[160653]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:20 compute-0 sudo[160791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjjbjxdzaorrneflqeqlpzhipuilncca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799139.6983502-1347-107423352458481/AnsiballZ_copy.py'
Nov 22 08:12:20 compute-0 sudo[160791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:20 compute-0 podman[160750]: 2025-11-22 08:12:20.616432498 +0000 UTC m=+0.087778626 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 22 08:12:20 compute-0 python3.9[160798]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799139.6983502-1347-107423352458481/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:12:20 compute-0 sudo[160791]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:21 compute-0 sudo[160954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iklhzbqkbswwhcdcdkfsppyykpdggfka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799141.0398576-1362-192913737826596/AnsiballZ_systemd.py'
Nov 22 08:12:21 compute-0 sudo[160954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:21 compute-0 python3.9[160956]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:12:21 compute-0 systemd[1]: Reloading.
Nov 22 08:12:21 compute-0 systemd-rc-local-generator[160979]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:12:21 compute-0 systemd-sysv-generator[160986]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Nov 22 08:12:22 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Nov 22 08:12:22 compute-0 sudo[160954]: pam_unix(sudo:session): session closed for user root
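edpm_libvirt.target is a plain grouping target: enabling and restarting it brings up the libvirt daemons hooked into it (virtsecretd above, presumably its sibling virt* daemons as well), and 'Reached target' confirms its dependencies are up. The manual equivalent of the module call (enabled=True, state=restarted):

    systemctl daemon-reload
    systemctl enable edpm_libvirt.target
    systemctl restart edpm_libvirt.target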
Nov 22 08:12:22 compute-0 sudo[161145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjceitehzaghslntkmtkqelswvspwgef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799142.335327-1370-36154917694941/AnsiballZ_systemd.py'
Nov 22 08:12:22 compute-0 sudo[161145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:22 compute-0 python3.9[161147]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 22 08:12:23 compute-0 systemd[1]: Reloading.
Nov 22 08:12:23 compute-0 systemd-sysv-generator[161175]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Nov 22 08:12:23 compute-0 systemd-rc-local-generator[161172]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:12:23 compute-0 systemd[1]: Reloading.
Nov 22 08:12:23 compute-0 systemd-sysv-generator[161216]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Nov 22 08:12:23 compute-0 systemd-rc-local-generator[161212]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:12:23 compute-0 sudo[161145]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:24 compute-0 sshd-session[106762]: Connection closed by 192.168.122.30 port 38942
Nov 22 08:12:24 compute-0 sshd-session[106759]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:12:24 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Nov 22 08:12:24 compute-0 systemd[1]: session-23.scope: Consumed 3min 20.169s CPU time.
Nov 22 08:12:24 compute-0 systemd-logind[826]: Session 23 logged out. Waiting for processes to exit.
Nov 22 08:12:24 compute-0 systemd-logind[826]: Removed session 23.
Nov 22 08:12:30 compute-0 sshd-session[161244]: Accepted publickey for zuul from 192.168.122.30 port 54144 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 08:12:30 compute-0 systemd-logind[826]: New session 24 of user zuul.
Nov 22 08:12:30 compute-0 systemd[1]: Started Session 24 of User zuul.
Nov 22 08:12:30 compute-0 sshd-session[161244]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 08:12:31 compute-0 python3.9[161397]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:12:32 compute-0 python3.9[161551]: ansible-ansible.builtin.service_facts Invoked
Nov 22 08:12:32 compute-0 network[161568]: You are using the 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 08:12:32 compute-0 network[161569]: 'network-scripts' will be removed from the distribution in the near future.
Nov 22 08:12:32 compute-0 network[161570]: It is advised to switch to 'NetworkManager' for network management.
Nov 22 08:12:35 compute-0 sudo[161839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npkerlllqdacxycarqevgkucidijxant ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799155.638227-47-267257658826806/AnsiballZ_setup.py'
Nov 22 08:12:35 compute-0 sudo[161839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:36 compute-0 python3.9[161841]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 08:12:36 compute-0 sudo[161839]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:37 compute-0 sudo[161923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfwhjxmcsdtpygxmuihpbwsfgmcftpjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799155.638227-47-267257658826806/AnsiballZ_dnf.py'
Nov 22 08:12:37 compute-0 sudo[161923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:37 compute-0 python3.9[161925]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 08:12:42 compute-0 sudo[161923]: pam_unix(sudo:session): session closed for user root
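ansible.legacy.dnf with state=present is a no-op when the package is already installed; otherwise it resolves to an ordinary install. By hand:

    dnf install -y iscsi-initiator-utils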
Nov 22 08:12:42 compute-0 sudo[162088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiwklewpeqtxeqlvnswmrderlnmybdcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799162.4157262-59-186040085476546/AnsiballZ_stat.py'
Nov 22 08:12:42 compute-0 sudo[162088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:42 compute-0 podman[162050]: 2025-11-22 08:12:42.833109395 +0000 UTC m=+0.060235297 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 22 08:12:43 compute-0 python3.9[162094]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:12:43 compute-0 sudo[162088]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:43 compute-0 sudo[162245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggeypckfspsjzrbezjfycghpcnimrjsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799163.2218878-69-222230230891142/AnsiballZ_command.py'
Nov 22 08:12:43 compute-0 sudo[162245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:43 compute-0 python3.9[162247]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:12:43 compute-0 sudo[162245]: pam_unix(sudo:session): session closed for user root
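restorecon runs here with -n (report only, change nothing), -v (verbose) and -r (recursive), i.e. a dry run listing files whose SELinux context deviates from the policy default; presumably a later task relabels for real if this reports anything. The command as logged:

    /usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi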
Nov 22 08:12:44 compute-0 sudo[162398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdhxxxbsfamcukmyxfuyohmohzgjtkqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799164.0575733-79-165283241978962/AnsiballZ_stat.py'
Nov 22 08:12:44 compute-0 sudo[162398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:44 compute-0 python3.9[162400]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:12:44 compute-0 sudo[162398]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:45 compute-0 sudo[162550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vivzkfpdajmdkknuiajzpkjkkpizsyas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799164.6848953-87-104936506678928/AnsiballZ_command.py'
Nov 22 08:12:45 compute-0 sudo[162550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:45 compute-0 python3.9[162552]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:12:45 compute-0 sudo[162550]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:45 compute-0 sudo[162703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpdahucygseohyfhnddmvjptqduzvnhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799165.3699944-95-206080036969321/AnsiballZ_stat.py'
Nov 22 08:12:45 compute-0 sudo[162703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:45 compute-0 python3.9[162705]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:12:45 compute-0 sudo[162703]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:46 compute-0 sudo[162826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utvvwxixnvymcrvrzafpfeogsamkcsgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799165.3699944-95-206080036969321/AnsiballZ_copy.py'
Nov 22 08:12:46 compute-0 sudo[162826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:46 compute-0 python3.9[162828]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799165.3699944-95-206080036969321/.source.iscsi _original_basename=.5wymyift follow=False checksum=3a8dd170d3163675bc30b0dd01771f4ac6873ff0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:12:46 compute-0 sudo[162826]: pam_unix(sudo:session): session closed for user root
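/usr/sbin/iscsi-iname (run two tasks earlier) prints a freshly generated IQN, which the copy task then writes to /etc/iscsi/initiatorname.iscsi. On RHEL-family systems the result is a single line of this form (the suffix below is made up for illustration):

    InitiatorName=iqn.1994-05.com.redhat:8a94c3b7d1e2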
Nov 22 08:12:47 compute-0 sudo[162978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iunrcryoycgztlhkntipuehnmalvwkel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799166.7850995-110-39213031232446/AnsiballZ_file.py'
Nov 22 08:12:47 compute-0 sudo[162978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:47 compute-0 python3.9[162980]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:12:47 compute-0 sudo[162978]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:48 compute-0 sudo[163130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nedibqsnvzusynnqhlacjkxqtuvkysya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799167.6987262-118-168298464989729/AnsiballZ_lineinfile.py'
Nov 22 08:12:48 compute-0 sudo[163130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:48 compute-0 python3.9[163132]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:12:48 compute-0 sudo[163130]: pam_unix(sudo:session): session closed for user root
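The lineinfile task pins the CHAP digest negotiation order in /etc/iscsi/iscsid.conf, preferring SHA3-256 and falling back as far as MD5 for older targets; the managed line is exactly:

    node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5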
Nov 22 08:12:48 compute-0 rsyslogd[1013]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 08:12:49 compute-0 sudo[163283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbbadkeszfjkkrskkzaodiufhxxnqjlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799168.5815196-127-45145039762842/AnsiballZ_systemd_service.py'
Nov 22 08:12:49 compute-0 sudo[163283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:49 compute-0 python3.9[163285]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:12:49 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 22 08:12:49 compute-0 sudo[163283]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:50 compute-0 sudo[163439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sctmeckyeuhaqrfsrlsjrfrkavqamfht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799169.7834525-135-276991152667689/AnsiballZ_systemd_service.py'
Nov 22 08:12:50 compute-0 sudo[163439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:50 compute-0 python3.9[163441]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:12:50 compute-0 systemd[1]: Reloading.
Nov 22 08:12:50 compute-0 systemd-sysv-generator[163474]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Nov 22 08:12:50 compute-0 systemd-rc-local-generator[163470]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:12:50 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 22 08:12:50 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 22 08:12:50 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Nov 22 08:12:50 compute-0 systemd[1]: Started Open-iSCSI.
Nov 22 08:12:50 compute-0 systemd[1]: Starting Logout of all iSCSI sessions on shutdown...
Nov 22 08:12:50 compute-0 systemd[1]: Finished Logout of all iSCSI sessions on shutdown.
Nov 22 08:12:50 compute-0 sudo[163439]: pam_unix(sudo:session): session closed for user root
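iscsid.socket is enabled before the service so iscsid can also be socket-activated on demand; the skipped 'One time configuration for iscsi.service' is expected, since its ConditionPathExists=!/etc/iscsi/initiatorname.iscsi check fails once the initiator name has been provisioned, as it was above. Equivalent commands:

    systemctl enable --now iscsid.socket
    systemctl enable --now iscsid.service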
Nov 22 08:12:50 compute-0 podman[163479]: 2025-11-22 08:12:50.846763385 +0000 UTC m=+0.124296286 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:12:51 compute-0 sudo[163664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bywfoymjzurmqzhaudhiuxxpigaqakgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799171.1599176-146-79328341340524/AnsiballZ_service_facts.py'
Nov 22 08:12:51 compute-0 sudo[163664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:51 compute-0 python3.9[163666]: ansible-ansible.builtin.service_facts Invoked
Nov 22 08:12:51 compute-0 network[163683]: You are using the 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 08:12:51 compute-0 network[163684]: 'network-scripts' will be removed from the distribution in the near future.
Nov 22 08:12:51 compute-0 network[163685]: It is advised to switch to 'NetworkManager' for network management.
Nov 22 08:12:56 compute-0 sudo[163664]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:56 compute-0 sudo[163954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdhkjakddbbtvnpofqzogzlbminpyook ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799176.3206167-156-190503063372826/AnsiballZ_file.py'
Nov 22 08:12:56 compute-0 sudo[163954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:56 compute-0 python3.9[163956]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 22 08:12:56 compute-0 sudo[163954]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:57 compute-0 sudo[164106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqsaijdrrcphryluthghoxmhjhpnqiag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799177.0322635-164-210082005395579/AnsiballZ_modprobe.py'
Nov 22 08:12:57 compute-0 sudo[164106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:57 compute-0 python3.9[164108]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 22 08:12:57 compute-0 sudo[164106]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:58 compute-0 sudo[164262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcnjlpfrljefipdukpbioykcsjspwrzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799177.8575563-172-239272831673113/AnsiballZ_stat.py'
Nov 22 08:12:58 compute-0 sudo[164262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:58 compute-0 python3.9[164264]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:12:58 compute-0 sudo[164262]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:58 compute-0 sudo[164386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmmdobuthlzmgdjnxwbycdrlnrfxwxbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799177.8575563-172-239272831673113/AnsiballZ_copy.py'
Nov 22 08:12:58 compute-0 sudo[164386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:58 compute-0 python3.9[164388]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799177.8575563-172-239272831673113/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:12:58 compute-0 sudo[164386]: pam_unix(sudo:session): session closed for user root
Nov 22 08:12:59 compute-0 sudo[164539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqrmsxwypbiffiabxzbiojvjdauupzqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799179.06427-188-251993095184688/AnsiballZ_lineinfile.py'
Nov 22 08:12:59 compute-0 sudo[164539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:12:59 compute-0 sshd-session[164366]: Invalid user loginuser from 80.94.92.164 port 45146
Nov 22 08:12:59 compute-0 python3.9[164541]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:12:59 compute-0 sudo[164539]: pam_unix(sudo:session): session closed for user root
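The three tasks above make the dm-multipath kernel module available both immediately and persistently: modprobe loads it into the running kernel (persistent=disabled, so the modprobe task itself writes nothing to disk), the copy drops a modules-load.d fragment for systemd, and lineinfile mirrors the module name into /etc/modules. A minimal playbook sketch reconstructed from the logged invocations (the rendered file content is an assumption; the log records only its checksum and the module-load.conf.j2 template name):

    - name: Load dm-multipath in the running kernel
      community.general.modprobe:
        name: dm-multipath
        state: present

    - name: Load dm-multipath on every boot
      ansible.builtin.copy:
        dest: /etc/modules-load.d/dm-multipath.conf
        content: "dm-multipath\n"  # assumed content, rendered from module-load.conf.j2
        mode: "0644"

    - name: Mirror the module name into /etc/modules
      ansible.builtin.lineinfile:
        path: /etc/modules
        line: dm-multipath
        create: true
        mode: "0644"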
Nov 22 08:12:59 compute-0 sshd-session[164366]: Connection closed by invalid user loginuser 80.94.92.164 port 45146 [preauth]
Nov 22 08:13:00 compute-0 sudo[164691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywrugmpyfvcccbdetuzesuwypjdatjtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799179.8975146-196-239572454461777/AnsiballZ_systemd.py'
Nov 22 08:13:00 compute-0 sudo[164691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:00 compute-0 python3.9[164693]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:13:00 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 22 08:13:00 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 22 08:13:00 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 22 08:13:00 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 22 08:13:00 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 22 08:13:00 compute-0 sudo[164691]: pam_unix(sudo:session): session closed for user root
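Restarting systemd-modules-load.service makes systemd re-read every modules-load.d fragment, so the new dm-multipath entry is exercised right away rather than on the next boot; the Stopped/Starting/Finished lines above are that restart. Reconstructed from the logged invocation:

    - name: Re-apply modules-load.d fragments now
      ansible.builtin.systemd:
        name: systemd-modules-load.service
        state: restarted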
Nov 22 08:13:01 compute-0 sudo[164847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqejruquvgybnprkwuvtxiyuprxkokph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799181.1074507-204-31949697199434/AnsiballZ_file.py'
Nov 22 08:13:01 compute-0 sudo[164847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:01 compute-0 python3.9[164849]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:13:01 compute-0 sudo[164847]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:02 compute-0 sudo[164999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwyhxeckpvbobvxwakkmodchhszajrml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799181.8514633-213-16026269515753/AnsiballZ_stat.py'
Nov 22 08:13:02 compute-0 sudo[164999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:02 compute-0 python3.9[165001]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:13:02 compute-0 sudo[164999]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:02 compute-0 sudo[165151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miviyukgkbydrbyyetuynxbwkhtmsdbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799182.4908545-222-179216945560917/AnsiballZ_stat.py'
Nov 22 08:13:02 compute-0 sudo[165151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:02 compute-0 python3.9[165153]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:13:02 compute-0 sudo[165151]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:03 compute-0 sudo[165303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzpmegwuktliztmmshhjtxxotpeylroe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799183.1551883-230-118281427391420/AnsiballZ_stat.py'
Nov 22 08:13:03 compute-0 sudo[165303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:03 compute-0 python3.9[165305]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:13:03 compute-0 sudo[165303]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:04 compute-0 sudo[165426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swocmaxairiuufrkakptcnjmzwkxgnts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799183.1551883-230-118281427391420/AnsiballZ_copy.py'
Nov 22 08:13:04 compute-0 sudo[165426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:04 compute-0 python3.9[165428]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799183.1551883-230-118281427391420/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:04 compute-0 sudo[165426]: pam_unix(sudo:session): session closed for user root
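The legacy.stat/legacy.copy pair is how ansible.builtin.copy behaves against a remote host: it first checksums the destination, then transfers only when the SHA1 differs, which is why an unchanged rerun would leave no copy entry. Per the logged parameters, the base file install is equivalent to:

    - name: Install the base multipath.conf
      ansible.builtin.copy:
        src: multipath.conf
        dest: /etc/multipath.conf
        mode: "0644"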
Nov 22 08:13:04 compute-0 sudo[165578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyaexahkyaqrvjmcfqtbiqjhyrimqzjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799184.6748998-245-153621313335812/AnsiballZ_command.py'
Nov 22 08:13:04 compute-0 sudo[165578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:05 compute-0 python3.9[165580]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:13:05 compute-0 sudo[165578]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:05 compute-0 sudo[165731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvfmiidpmphsykhpuzlfuwwnfrgsvrju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799185.3559856-253-196562492315121/AnsiballZ_lineinfile.py'
Nov 22 08:13:05 compute-0 sudo[165731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:05 compute-0 python3.9[165733]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:05 compute-0 sudo[165731]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:06 compute-0 sudo[165884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwwsgedkoktmpnwuxuhspqlmelotbfup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799185.9953516-261-265272787268657/AnsiballZ_replace.py'
Nov 22 08:13:06 compute-0 sudo[165884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:06 compute-0 python3.9[165886]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:06 compute-0 sudo[165884]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:07 compute-0 sudo[166036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lufechxpybjjqmvnuafkoquerdphagvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799186.8771152-269-30591783242098/AnsiballZ_replace.py'
Nov 22 08:13:07 compute-0 sudo[166036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:07 compute-0 python3.9[166038]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:07 compute-0 sudo[166036]: pam_unix(sudo:session): session closed for user root
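The last three tasks build an empty blacklist section in /etc/multipath.conf: the shell grep tests whether a 'blacklist {' stanza already exists, lineinfile appends the opening line, the first replace closes the stanza by rewriting 'blacklist {' into 'blacklist {' plus a '}' on the next line, and the second replace strips a catch-all devnode ".*" entry if the shipped file carried one. Assuming the file had no blacklist section beforehand, the net result is:

    blacklist {
    }

Leaving the stanza empty, rather than blacklisting devnode ".*", means no device node is excluded from multipath handling.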
Nov 22 08:13:07 compute-0 sudo[166188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptvtsiwnbuqhmbxjilethhjlpkjmotym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799187.5809152-278-193331958409336/AnsiballZ_lineinfile.py'
Nov 22 08:13:07 compute-0 sudo[166188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:08 compute-0 python3.9[166190]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:08 compute-0 sudo[166188]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:08 compute-0 sudo[166340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmxtmshqohimejqsnorzoruldahbkwpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799188.2038772-278-139944615136066/AnsiballZ_lineinfile.py'
Nov 22 08:13:08 compute-0 sudo[166340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:08 compute-0 python3.9[166342]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:08 compute-0 sudo[166340]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:09 compute-0 sudo[166492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpygszngdtzeumjfkhgouukkjmcxfrha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799188.991491-278-129879057755780/AnsiballZ_lineinfile.py'
Nov 22 08:13:09 compute-0 sudo[166492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:09 compute-0 python3.9[166494]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:09 compute-0 sudo[166492]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:09 compute-0 sudo[166644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glawoyzcctzswuqookrlkzqnelczqafj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799189.5903766-278-66605714707332/AnsiballZ_lineinfile.py'
Nov 22 08:13:09 compute-0 sudo[166644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:13:09.941 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:13:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:13:09.942 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:13:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:13:09.943 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:13:10 compute-0 python3.9[166646]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:10 compute-0 sudo[166644]: pam_unix(sudo:session): session closed for user root
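Each of the four lineinfile tasks inserts directly under the line matching ^defaults (insertafter with firstmatch=True), so the task that runs last lands closest to the section header. Assuming none of the keys existed beforehand, the defaults section comes out as:

    defaults {
            user_friendly_names no
            skip_kpartx yes
            recheck_wwid yes
            find_multipaths yes
    }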
Nov 22 08:13:10 compute-0 sudo[166796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adrwblsidurqalogubaluevtigwjktpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799190.2708926-307-47529701560738/AnsiballZ_stat.py'
Nov 22 08:13:10 compute-0 sudo[166796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:10 compute-0 python3.9[166798]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:13:10 compute-0 sudo[166796]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:11 compute-0 sudo[166950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfuskxkivqskkcpswvitwspyorcptgvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799191.0131896-315-60962994923642/AnsiballZ_file.py'
Nov 22 08:13:11 compute-0 sudo[166950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:11 compute-0 python3.9[166952]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:11 compute-0 sudo[166950]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:11 compute-0 sudo[167102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekoyglgotoakaojkeztqkmpzwsuefizv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799191.727887-324-235735199209222/AnsiballZ_file.py'
Nov 22 08:13:11 compute-0 sudo[167102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:12 compute-0 python3.9[167104]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:13:12 compute-0 sudo[167102]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:12 compute-0 sudo[167254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnbxihtbuwwcvlmvrmbezrplaxyrrvaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799192.36415-332-35227899465625/AnsiballZ_stat.py'
Nov 22 08:13:12 compute-0 sudo[167254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:12 compute-0 python3.9[167256]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:13:12 compute-0 sudo[167254]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:13 compute-0 sudo[167347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qixrkgwifzbblfuhysfwaqinvpjzshxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799192.36415-332-35227899465625/AnsiballZ_file.py'
Nov 22 08:13:13 compute-0 sudo[167347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:13 compute-0 podman[167306]: 2025-11-22 08:13:13.093324136 +0000 UTC m=+0.062163258 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 08:13:13 compute-0 python3.9[167353]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:13:13 compute-0 sudo[167347]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:13 compute-0 sudo[167503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uifxuyeizofzjwhnvirwmmkscwvwugcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799193.4034045-332-197914977799875/AnsiballZ_stat.py'
Nov 22 08:13:13 compute-0 sudo[167503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:13 compute-0 python3.9[167505]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:13:14 compute-0 sudo[167503]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:14 compute-0 sudo[167581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boteioqekzzsabfsfiiitqkcfwrgspfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799193.4034045-332-197914977799875/AnsiballZ_file.py'
Nov 22 08:13:14 compute-0 sudo[167581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:14 compute-0 python3.9[167583]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:13:14 compute-0 sudo[167581]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:14 compute-0 sudo[167733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biquhbkpgumwrkihrxqlikmqqvoslzmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799194.6074464-355-259852052878407/AnsiballZ_file.py'
Nov 22 08:13:14 compute-0 sudo[167733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:15 compute-0 python3.9[167735]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:15 compute-0 sudo[167733]: pam_unix(sudo:session): session closed for user root
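The mode=420 above is not a typo: in YAML an unquoted octal literal such as 0644 is parsed as the integer 420 (0o644 == 420), and Ansible then logs the decimal form. The permissions applied are still rw-r--r--. Quoting the mode avoids the ambiguity; a sketch of the equivalent task:

    - name: Ensure the systemd preset directory exists
      ansible.builtin.file:
        path: /etc/systemd/system-preset
        state: directory
        mode: "0644"  # the unquoted form 0644 is what produced "mode=420" in the log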
Nov 22 08:13:15 compute-0 sudo[167885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wahrcpsqytxiqsscoogtkbqflhpjlsai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799195.233375-363-182385871114804/AnsiballZ_stat.py'
Nov 22 08:13:15 compute-0 sudo[167885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:15 compute-0 python3.9[167887]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:13:15 compute-0 sudo[167885]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:15 compute-0 sudo[167963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esfrzxaljvuajbfrmrqqfxvzlepmudxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799195.233375-363-182385871114804/AnsiballZ_file.py'
Nov 22 08:13:15 compute-0 sudo[167963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:16 compute-0 python3.9[167965]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:16 compute-0 sudo[167963]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:16 compute-0 sudo[168115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kolpcwzkoqeyidnjdyeoyvvevmhmnqlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799196.307078-375-225803690738977/AnsiballZ_stat.py'
Nov 22 08:13:16 compute-0 sudo[168115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:16 compute-0 python3.9[168117]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:13:16 compute-0 sudo[168115]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:17 compute-0 sudo[168193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpefwbcgwmwasonjeyejgthvqrhzswya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799196.307078-375-225803690738977/AnsiballZ_file.py'
Nov 22 08:13:17 compute-0 sudo[168193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:17 compute-0 python3.9[168195]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:17 compute-0 sudo[168193]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:17 compute-0 sudo[168345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsbwvuaanvdijbthenyryiyrgedotowe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799197.4435842-387-180263534994088/AnsiballZ_systemd.py'
Nov 22 08:13:17 compute-0 sudo[168345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:18 compute-0 python3.9[168347]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:13:18 compute-0 systemd[1]: Reloading.
Nov 22 08:13:18 compute-0 systemd-rc-local-generator[168372]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:13:18 compute-0 systemd-sysv-generator[168375]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Nov 22 08:13:18 compute-0 sudo[168345]: pam_unix(sudo:session): session closed for user root
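The preset file installed above pairs with the unit so that its enablement is policy rather than a one-off; presets are one-line directives, and while the file's content is only identified by checksum in the log, a 91-*.preset for this purpose would typically read:

    enable edpm-container-shutdown.service

The ansible.builtin.systemd task then enables and starts the unit in one step with daemon_reload=True, which is what triggers the Reloading entry and the generator warnings around it.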
Nov 22 08:13:18 compute-0 sudo[168533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjuhdxrtzoyaadtfhpomsuhehssfqcyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799198.6746297-395-61652157250722/AnsiballZ_stat.py'
Nov 22 08:13:18 compute-0 sudo[168533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:19 compute-0 python3.9[168535]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:13:19 compute-0 sudo[168533]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:19 compute-0 sudo[168611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mppetsfntahoxqqbhufwasqnapfokhqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799198.6746297-395-61652157250722/AnsiballZ_file.py'
Nov 22 08:13:19 compute-0 sudo[168611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:19 compute-0 python3.9[168613]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:19 compute-0 sudo[168611]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:20 compute-0 sudo[168763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opiwhbppchoumglebwsazgsvgxbbokgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799200.356165-407-49377853338464/AnsiballZ_stat.py'
Nov 22 08:13:20 compute-0 sudo[168763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:20 compute-0 python3.9[168765]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:13:20 compute-0 sudo[168763]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:21 compute-0 sudo[168851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdeftfgyqikrknkgqdohoxxlddgnxteq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799200.356165-407-49377853338464/AnsiballZ_file.py'
Nov 22 08:13:21 compute-0 sudo[168851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:21 compute-0 podman[168815]: 2025-11-22 08:13:21.133389874 +0000 UTC m=+0.096664986 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 08:13:21 compute-0 python3.9[168861]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:21 compute-0 sudo[168851]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:22 compute-0 sudo[169020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeuktkmqfrjjccwnakfzxderguwiafdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799201.4408686-419-77455748547184/AnsiballZ_systemd.py'
Nov 22 08:13:22 compute-0 sudo[169020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:22 compute-0 python3.9[169022]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:13:22 compute-0 systemd[1]: Reloading.
Nov 22 08:13:22 compute-0 systemd-sysv-generator[169052]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Nov 22 08:13:22 compute-0 systemd-rc-local-generator[169049]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:13:22 compute-0 systemd[1]: Starting Create netns directory...
Nov 22 08:13:22 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 08:13:22 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 08:13:22 compute-0 systemd[1]: Finished Create netns directory.
Nov 22 08:13:22 compute-0 sudo[169020]: pam_unix(sudo:session): session closed for user root
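netns-placeholder follows the same install/preset/enable pattern. Judging by its logged output ("Create netns directory", the run-netns-placeholder.mount deactivation), it is a oneshot that creates the shared /run/netns directory so container network namespaces have somewhere to land before any agent starts. The enabling task, reconstructed from the logged invocation:

    - name: Enable and start netns-placeholder
      ansible.builtin.systemd:
        name: netns-placeholder
        state: started
        enabled: true
        daemon_reload: true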
Nov 22 08:13:23 compute-0 sudo[169212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejvocduygcucsogafjbdpuvuevxyeaxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799203.2291696-429-103915394205404/AnsiballZ_file.py'
Nov 22 08:13:23 compute-0 sudo[169212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:23 compute-0 python3.9[169214]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:13:23 compute-0 sudo[169212]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:24 compute-0 sudo[169364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxxwywbiwauxhwopdfxedgwgrxjlelqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799203.9919837-437-211542272904631/AnsiballZ_stat.py'
Nov 22 08:13:24 compute-0 sudo[169364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:24 compute-0 python3.9[169366]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:13:24 compute-0 sudo[169364]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:24 compute-0 sudo[169487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loiwphfwpumqjexzxzqotoozakmbwnih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799203.9919837-437-211542272904631/AnsiballZ_copy.py'
Nov 22 08:13:24 compute-0 sudo[169487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:24 compute-0 python3.9[169489]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763799203.9919837-437-211542272904631/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:13:24 compute-0 sudo[169487]: pam_unix(sudo:session): session closed for user root
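The healthcheck script is staged on the host and, per the config_data shown in the podman entries above, bind-mounted read-only into the container at /openstack, where /openstack/healthcheck becomes the container's healthcheck test command. The deployment task, reconstructed from the logged parameters:

    - name: Install the multipathd healthcheck script
      ansible.builtin.copy:
        src: healthcheck
        dest: /var/lib/openstack/healthchecks/multipathd/
        owner: zuul
        group: zuul
        mode: "0700"
        setype: container_file_t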
Nov 22 08:13:25 compute-0 sudo[169639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mthwsjfgtntcodvgounejbdbmifjlgan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799205.4055173-454-4013653151163/AnsiballZ_file.py'
Nov 22 08:13:25 compute-0 sudo[169639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:26 compute-0 python3.9[169641]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:13:26 compute-0 sudo[169639]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:27 compute-0 sudo[169791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfkfjjutfjlgramoqxvtwbscafnssdex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799207.13047-462-274178418741775/AnsiballZ_stat.py'
Nov 22 08:13:27 compute-0 sudo[169791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:28 compute-0 python3.9[169793]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:13:28 compute-0 sudo[169791]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:28 compute-0 sudo[169914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqhkhrkleomisphtemszvkqqahsuwfso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799207.13047-462-274178418741775/AnsiballZ_copy.py'
Nov 22 08:13:28 compute-0 sudo[169914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:28 compute-0 python3.9[169916]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799207.13047-462-274178418741775/.source.json _original_basename=.xxz8nzrt follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:28 compute-0 sudo[169914]: pam_unix(sudo:session): session closed for user root
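multipathd.json is a kolla config descriptor: the container entrypoint reads it (mounted at /var/lib/kolla/config_files/config.json) to decide which command to exec and which files to copy into place, with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS forcing the copy on every start. Its actual content is only identified by checksum in the log; a descriptor of this shape typically looks like the following, where the command value is illustrative and not taken from the log:

    {
        "command": "/usr/sbin/multipathd -d",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/src/*",
                "dest": "/",
                "merge": true,
                "preserve_properties": true
            }
        ]
    }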
Nov 22 08:13:29 compute-0 sudo[170066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzwzwwtziqexectzamsgnfrrpageenba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799208.882023-477-258967170846815/AnsiballZ_file.py'
Nov 22 08:13:29 compute-0 sudo[170066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:29 compute-0 python3.9[170068]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:29 compute-0 sudo[170066]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:29 compute-0 sudo[170218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iopkhdhitghdjjzitlsxenyfasnnlyoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799209.5210884-485-267992300460362/AnsiballZ_stat.py'
Nov 22 08:13:29 compute-0 sudo[170218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:29 compute-0 sudo[170218]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:30 compute-0 sudo[170341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuztblzcstfcinztirenyadrhggnlbbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799209.5210884-485-267992300460362/AnsiballZ_copy.py'
Nov 22 08:13:30 compute-0 sudo[170341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:30 compute-0 sudo[170341]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:31 compute-0 sudo[170493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naeutfwooddejbfrspkzjuckwclqhxvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799210.810443-502-29948400375124/AnsiballZ_container_config_data.py'
Nov 22 08:13:31 compute-0 sudo[170493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:31 compute-0 python3.9[170495]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 22 08:13:31 compute-0 sudo[170493]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:32 compute-0 sudo[170645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwgadvijvtkojzemasadzjyajlqwgouf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799211.639782-511-222909598703846/AnsiballZ_container_config_hash.py'
Nov 22 08:13:32 compute-0 sudo[170645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:32 compute-0 python3.9[170647]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 08:13:32 compute-0 sudo[170645]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:32 compute-0 sudo[170797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttiihkcdztiniqvwfkcivaenwdjtninw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799212.489937-520-79443775138459/AnsiballZ_podman_container_info.py'
Nov 22 08:13:32 compute-0 sudo[170797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:33 compute-0 python3.9[170799]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 22 08:13:33 compute-0 sudo[170797]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:34 compute-0 sudo[170975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfpydipqnufhvnwgkhkxwlobogfvgnbg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763799213.9588058-533-8897534268781/AnsiballZ_edpm_container_manage.py'
Nov 22 08:13:34 compute-0 sudo[170975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:34 compute-0 python3[170977]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 08:13:34 compute-0 podman[171013]: 2025-11-22 08:13:34.922343463 +0000 UTC m=+0.050306248 container create 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=multipathd)
Nov 22 08:13:34 compute-0 podman[171013]: 2025-11-22 08:13:34.89550391 +0000 UTC m=+0.023466715 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 22 08:13:34 compute-0 python3[170977]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 22 08:13:35 compute-0 sudo[170975]: pam_unix(sudo:session): session closed for user root
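edpm_container_manage is the step that turns the staged JSON into an actual container: it reads every *.json under the config dir, labels the container with config_id and the full config_data (which later runs can compare against the desired config before deciding to recreate), and echoes the exact podman create command as the PODMAN-CONTAINER-DEBUG line above. The invocation, reconstructed from the log:

    - name: Manage the multipathd container
      edpm_container_manage:
        config_dir: /var/lib/edpm-config/container-startup-config/multipathd
        config_id: multipathd
        config_patterns: "*.json"
        log_base_path: /var/log/containers/stdouts
        concurrency: 1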
Nov 22 08:13:35 compute-0 sudo[171199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfrefyejywkllrryykiphryrogqaiefo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799215.2370152-541-104039057452429/AnsiballZ_stat.py'
Nov 22 08:13:35 compute-0 sudo[171199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:35 compute-0 python3.9[171201]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:13:35 compute-0 sudo[171199]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:36 compute-0 sudo[171353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofxqhtzxwhrysdtgowrqafruorpvuksa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799215.9279213-550-167138908145413/AnsiballZ_file.py'
Nov 22 08:13:36 compute-0 sudo[171353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:36 compute-0 python3.9[171355]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:36 compute-0 sudo[171353]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:36 compute-0 sudo[171429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wltxdnhpuydifozknhyhlvlixviynmfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799215.9279213-550-167138908145413/AnsiballZ_stat.py'
Nov 22 08:13:36 compute-0 sudo[171429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:36 compute-0 python3.9[171431]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:13:36 compute-0 sudo[171429]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:37 compute-0 sudo[171580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnfyerzcwbojgghdwxdgzcjmcnvsjyzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799216.8889384-550-121930543247405/AnsiballZ_copy.py'
Nov 22 08:13:37 compute-0 sudo[171580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:37 compute-0 python3.9[171582]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763799216.8889384-550-121930543247405/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:37 compute-0 sudo[171580]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:38 compute-0 sudo[171656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bizgntnydvkgfppjoemslffzaatdgedr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799216.8889384-550-121930543247405/AnsiballZ_systemd.py'
Nov 22 08:13:38 compute-0 sudo[171656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:38 compute-0 python3.9[171658]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:13:38 compute-0 systemd[1]: Reloading.
Nov 22 08:13:38 compute-0 systemd-sysv-generator[171688]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:13:38 compute-0 systemd-rc-local-generator[171685]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:13:38 compute-0 sudo[171656]: pam_unix(sudo:session): session closed for user root
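
Note: 08:13:37-08:13:38 is the standard unit-deployment sequence: ansible-copy installs the rendered /etc/systemd/system/edpm_multipathd.service (mode 0644, root:root), then ansible-systemd performs a daemon-reload so systemd re-reads unit files. The generator warnings about the SysV 'network' script and the non-executable rc.local are emitted on every reload and are unrelated to this unit. A shell equivalent, assuming the rendered unit file is already in the working directory:

    sudo install -m 0644 -o root -g root edpm_multipathd.service /etc/systemd/system/edpm_multipathd.service
    sudo systemctl daemon-reload
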
Nov 22 08:13:39 compute-0 sudo[171766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdxpjvhhdnjklsqiwoteqwfewuwwlfqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799216.8889384-550-121930543247405/AnsiballZ_systemd.py'
Nov 22 08:13:39 compute-0 sudo[171766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:39 compute-0 python3.9[171768]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:13:39 compute-0 systemd[1]: Reloading.
Nov 22 08:13:39 compute-0 systemd-sysv-generator[171796]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:13:39 compute-0 systemd-rc-local-generator[171793]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:13:39 compute-0 systemd[1]: Starting multipathd container...
Nov 22 08:13:39 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529f7bb9019580c57fbafbb5bc773eb609bb2fac9b918d4f6e98c55f5db340eb/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 08:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529f7bb9019580c57fbafbb5bc773eb609bb2fac9b918d4f6e98c55f5db340eb/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 08:13:40 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878.
Nov 22 08:13:40 compute-0 podman[171808]: 2025-11-22 08:13:40.277878402 +0000 UTC m=+0.493501440 container init 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:13:40 compute-0 multipathd[171822]: + sudo -E kolla_set_configs
Nov 22 08:13:40 compute-0 podman[171808]: 2025-11-22 08:13:40.313301513 +0000 UTC m=+0.528924511 container start 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 08:13:40 compute-0 podman[171808]: multipathd
Nov 22 08:13:40 compute-0 sudo[171828]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 22 08:13:40 compute-0 sudo[171828]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 08:13:40 compute-0 systemd[1]: Started multipathd container.
Nov 22 08:13:40 compute-0 sudo[171828]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 22 08:13:40 compute-0 sudo[171766]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:40 compute-0 multipathd[171822]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 08:13:40 compute-0 multipathd[171822]: INFO:__main__:Validating config file
Nov 22 08:13:40 compute-0 multipathd[171822]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 08:13:40 compute-0 multipathd[171822]: INFO:__main__:Writing out command to execute
Nov 22 08:13:40 compute-0 podman[171829]: 2025-11-22 08:13:40.396741399 +0000 UTC m=+0.068907890 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 08:13:40 compute-0 sudo[171828]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:40 compute-0 multipathd[171822]: ++ cat /run_command
Nov 22 08:13:40 compute-0 systemd[1]: 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878-19f5312898e5c398.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 08:13:40 compute-0 multipathd[171822]: + CMD='/usr/sbin/multipathd -d'
Nov 22 08:13:40 compute-0 multipathd[171822]: + ARGS=
Nov 22 08:13:40 compute-0 multipathd[171822]: + sudo kolla_copy_cacerts
Nov 22 08:13:40 compute-0 systemd[1]: 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878-19f5312898e5c398.service: Failed with result 'exit-code'.
Nov 22 08:13:40 compute-0 sudo[171859]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 22 08:13:40 compute-0 sudo[171859]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 08:13:40 compute-0 sudo[171859]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 22 08:13:40 compute-0 sudo[171859]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:40 compute-0 multipathd[171822]: + [[ ! -n '' ]]
Nov 22 08:13:40 compute-0 multipathd[171822]: + . kolla_extend_start
Nov 22 08:13:40 compute-0 multipathd[171822]: Running command: '/usr/sbin/multipathd -d'
Nov 22 08:13:40 compute-0 multipathd[171822]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 22 08:13:40 compute-0 multipathd[171822]: + umask 0022
Nov 22 08:13:40 compute-0 multipathd[171822]: + exec /usr/sbin/multipathd -d
Nov 22 08:13:40 compute-0 multipathd[171822]: 4334.078607 | --------start up--------
Nov 22 08:13:40 compute-0 multipathd[171822]: 4334.078628 | read /etc/multipath.conf
Nov 22 08:13:40 compute-0 multipathd[171822]: 4334.084175 | path checkers start up
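
Note: the multipathd[171822] '+' lines are bash xtrace output from the Kolla container entrypoint: kolla_set_configs stages files according to /var/lib/kolla/config_files/config.json under the COPY_ALWAYS strategy and writes the service command to /run_command, kolla_copy_cacerts refreshes the container's CA trust, and the shell finally execs the real daemon. The failed transient unit 02f0b7...-19f5312898e5c398.service interleaved above is the first timer-driven podman healthcheck, which fired while the container was still in health_status=starting and exited 1. A condensed sketch of the traced entrypoint logic, not the full kolla_start script:

    sudo -E kolla_set_configs       # stage config files per config.json (COPY_ALWAYS)
    CMD="$(cat /run_command)"       # here: '/usr/sbin/multipathd -d'
    sudo kolla_copy_cacerts         # copy CA anchors into the container trust store
    exec $CMD                       # replace the shell with the daemon
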
Nov 22 08:13:40 compute-0 python3.9[172012]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:13:41 compute-0 sudo[172164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjpcavbjytjmlmxhqtwjswjzokloulsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799221.1527858-586-29645231876119/AnsiballZ_command.py'
Nov 22 08:13:41 compute-0 sudo[172164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:41 compute-0 python3.9[172166]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:13:41 compute-0 sudo[172164]: pam_unix(sudo:session): session closed for user root
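
Note: the command above enumerates containers that bind-mount /etc/multipath.conf; it is how the play determines which container(s) must be restarted when the multipath configuration changes. Runnable as-is, with the Go template quoted for the shell:

    podman ps --filter volume=/etc/multipath.conf --format '{{.Names}}'
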
Nov 22 08:13:42 compute-0 sudo[172328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmbhgnszkrbevljecwcjaubfbyipqjgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799221.8172662-594-176589268460475/AnsiballZ_systemd.py'
Nov 22 08:13:42 compute-0 sudo[172328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:42 compute-0 python3.9[172330]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:13:42 compute-0 systemd[1]: Stopping multipathd container...
Nov 22 08:13:42 compute-0 multipathd[171822]: 4336.183440 | exit (signal)
Nov 22 08:13:42 compute-0 multipathd[171822]: 4336.183495 | --------shut down-------
Nov 22 08:13:42 compute-0 systemd[1]: libpod-02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878.scope: Deactivated successfully.
Nov 22 08:13:42 compute-0 podman[172334]: 2025-11-22 08:13:42.568206363 +0000 UTC m=+0.071681234 container died 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible)
Nov 22 08:13:42 compute-0 systemd[1]: 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878-19f5312898e5c398.timer: Deactivated successfully.
Nov 22 08:13:42 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878.
Nov 22 08:13:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878-userdata-shm.mount: Deactivated successfully.
Nov 22 08:13:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-529f7bb9019580c57fbafbb5bc773eb609bb2fac9b918d4f6e98c55f5db340eb-merged.mount: Deactivated successfully.
Nov 22 08:13:42 compute-0 podman[172334]: 2025-11-22 08:13:42.634178754 +0000 UTC m=+0.137653615 container cleanup 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:13:42 compute-0 podman[172334]: multipathd
Nov 22 08:13:42 compute-0 podman[172361]: multipathd
Nov 22 08:13:42 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 22 08:13:42 compute-0 systemd[1]: Stopped multipathd container.
Nov 22 08:13:42 compute-0 systemd[1]: Starting multipathd container...
Nov 22 08:13:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529f7bb9019580c57fbafbb5bc773eb609bb2fac9b918d4f6e98c55f5db340eb/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 08:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529f7bb9019580c57fbafbb5bc773eb609bb2fac9b918d4f6e98c55f5db340eb/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 08:13:42 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878.
Nov 22 08:13:42 compute-0 podman[172372]: 2025-11-22 08:13:42.8150986 +0000 UTC m=+0.096585621 container init 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Nov 22 08:13:42 compute-0 multipathd[172388]: + sudo -E kolla_set_configs
Nov 22 08:13:42 compute-0 sudo[172394]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 22 08:13:42 compute-0 podman[172372]: 2025-11-22 08:13:42.849184801 +0000 UTC m=+0.130671852 container start 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 08:13:42 compute-0 sudo[172394]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 08:13:42 compute-0 sudo[172394]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 22 08:13:42 compute-0 podman[172372]: multipathd
Nov 22 08:13:42 compute-0 systemd[1]: Started multipathd container.
Nov 22 08:13:42 compute-0 multipathd[172388]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 08:13:42 compute-0 sudo[172328]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:42 compute-0 multipathd[172388]: INFO:__main__:Validating config file
Nov 22 08:13:42 compute-0 multipathd[172388]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 08:13:42 compute-0 multipathd[172388]: INFO:__main__:Writing out command to execute
Nov 22 08:13:42 compute-0 sudo[172394]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:42 compute-0 multipathd[172388]: ++ cat /run_command
Nov 22 08:13:42 compute-0 multipathd[172388]: + CMD='/usr/sbin/multipathd -d'
Nov 22 08:13:42 compute-0 multipathd[172388]: + ARGS=
Nov 22 08:13:42 compute-0 multipathd[172388]: + sudo kolla_copy_cacerts
Nov 22 08:13:42 compute-0 sudo[172407]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 22 08:13:42 compute-0 sudo[172407]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 08:13:42 compute-0 sudo[172407]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 22 08:13:42 compute-0 sudo[172407]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:42 compute-0 multipathd[172388]: + [[ ! -n '' ]]
Nov 22 08:13:42 compute-0 multipathd[172388]: + . kolla_extend_start
Nov 22 08:13:42 compute-0 multipathd[172388]: Running command: '/usr/sbin/multipathd -d'
Nov 22 08:13:42 compute-0 multipathd[172388]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 22 08:13:42 compute-0 multipathd[172388]: + umask 0022
Nov 22 08:13:42 compute-0 multipathd[172388]: + exec /usr/sbin/multipathd -d
Nov 22 08:13:42 compute-0 multipathd[172388]: 4336.592781 | --------start up--------
Nov 22 08:13:42 compute-0 multipathd[172388]: 4336.592795 | read /etc/multipath.conf
Nov 22 08:13:42 compute-0 multipathd[172388]: 4336.598692 | path checkers start up
Nov 22 08:13:42 compute-0 podman[172395]: 2025-11-22 08:13:42.975615104 +0000 UTC m=+0.105787345 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 08:13:42 compute-0 systemd[1]: 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878-6a88bb1c54548f22.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 08:13:42 compute-0 systemd[1]: 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878-6a88bb1c54548f22.service: Failed with result 'exit-code'.
Nov 22 08:13:43 compute-0 sudo[172583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrqzhkuonlyeaicensxgbxdixnqoxzcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799223.0517135-602-32882954990299/AnsiballZ_file.py'
Nov 22 08:13:43 compute-0 sudo[172583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:43 compute-0 podman[172548]: 2025-11-22 08:13:43.3844978 +0000 UTC m=+0.064809685 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 08:13:43 compute-0 python3.9[172593]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:43 compute-0 sudo[172583]: pam_unix(sudo:session): session closed for user root
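
Note: the stat at 08:13:40 found /etc/multipath/.multipath_restart_required, so the play restarted edpm_multipathd (stop at 08:13:42.5, start within the same second) and then deleted the flag; the second transient healthcheck unit (...-6a88bb1c54548f22.service) again fails during the 'starting' window, same as before. This is a plain restart-required flag-file pattern; a minimal sketch of the same logic:

    if [ -e /etc/multipath/.multipath_restart_required ]; then
        sudo systemctl restart edpm_multipathd.service
        sudo rm -f /etc/multipath/.multipath_restart_required
    fi
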
Nov 22 08:13:44 compute-0 sudo[172745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-expogbcqiyipyirhtphdxegdulofofbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799223.8984678-614-180055928163251/AnsiballZ_file.py'
Nov 22 08:13:44 compute-0 sudo[172745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:44 compute-0 python3.9[172747]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 22 08:13:44 compute-0 sudo[172745]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:44 compute-0 sudo[172897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elkwwddyfrdjykeewjssdwmizupyfpgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799224.5505767-622-115359235680449/AnsiballZ_modprobe.py'
Nov 22 08:13:44 compute-0 sudo[172897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:45 compute-0 python3.9[172899]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 22 08:13:45 compute-0 kernel: Key type psk registered
Nov 22 08:13:45 compute-0 sudo[172897]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:45 compute-0 sudo[173059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzepjwopqaegohhlcktyggqfphaqmrqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799225.2757242-630-165082228604963/AnsiballZ_stat.py'
Nov 22 08:13:45 compute-0 sudo[173059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:45 compute-0 python3.9[173061]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:13:45 compute-0 sudo[173059]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:46 compute-0 sudo[173182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bausitnxfebtndywiwsxzvrqwhlgsaju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799225.2757242-630-165082228604963/AnsiballZ_copy.py'
Nov 22 08:13:46 compute-0 sudo[173182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:46 compute-0 python3.9[173184]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799225.2757242-630-165082228604963/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:46 compute-0 sudo[173182]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:46 compute-0 sudo[173334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvgnvuuhdlexypmkazrvtzoomvfimuxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799226.511561-646-12962485441776/AnsiballZ_lineinfile.py'
Nov 22 08:13:46 compute-0 sudo[173334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:47 compute-0 python3.9[173336]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:47 compute-0 sudo[173334]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:47 compute-0 sudo[173486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isgdusdpswrcfqchsleowinwkcawnxgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799227.2783902-654-270555651211679/AnsiballZ_systemd.py'
Nov 22 08:13:47 compute-0 sudo[173486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:47 compute-0 python3.9[173488]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:13:47 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 22 08:13:47 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 22 08:13:47 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 22 08:13:47 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 22 08:13:47 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 22 08:13:47 compute-0 sudo[173486]: pam_unix(sudo:session): session closed for user root
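
Note: the nvme-fabrics module is made available both immediately and persistently: community.general.modprobe loads it into the running kernel (the 'Key type psk registered' kernel message appears to be a side effect of the module load pulling in its TLS/PSK dependencies), a drop-in is written to /etc/modules-load.d/nvme-fabrics.conf, the module name is appended to /etc/modules via lineinfile, and systemd-modules-load is restarted to apply the drop-in. A shell equivalent:

    sudo modprobe nvme-fabrics
    echo nvme-fabrics | sudo tee /etc/modules-load.d/nvme-fabrics.conf
    sudo systemctl restart systemd-modules-load.service
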
Nov 22 08:13:48 compute-0 sudo[173642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqepojzixqydyfnmbqpkptzkcdcgeote ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799228.206939-662-212453431331212/AnsiballZ_dnf.py'
Nov 22 08:13:48 compute-0 sudo[173642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:48 compute-0 python3.9[173644]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 08:13:51 compute-0 systemd[1]: Reloading.
Nov 22 08:13:51 compute-0 systemd-rc-local-generator[173693]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:13:51 compute-0 systemd-sysv-generator[173696]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:13:51 compute-0 podman[173651]: 2025-11-22 08:13:51.82099084 +0000 UTC m=+0.104114616 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 22 08:13:51 compute-0 systemd[1]: Reloading.
Nov 22 08:13:52 compute-0 systemd-rc-local-generator[173738]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:13:52 compute-0 systemd-sysv-generator[173742]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:13:52 compute-0 systemd-logind[826]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 22 08:13:52 compute-0 systemd-logind[826]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 22 08:13:52 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 08:13:52 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 08:13:52 compute-0 systemd[1]: Reloading.
Nov 22 08:13:52 compute-0 systemd-sysv-generator[173837]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:13:52 compute-0 systemd-rc-local-generator[173834]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:13:52 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 08:13:53 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 22 08:13:53 compute-0 sudo[173642]: pam_unix(sudo:session): session closed for user root
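
Note: the dnf task installs nvme-cli; the subsequent 'Reloading.' pairs, the man-db-cache-update unit, and the 'Queuing reload/restart jobs for marked units' line are all RPM post-transaction scriptlet activity, not additional playbook tasks. Equivalent to:

    sudo dnf -y install nvme-cli
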
Nov 22 08:13:53 compute-0 sudo[175121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyifkmqerigcnsofsgjlcxcxhtqniyxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799233.6793694-670-217487598745825/AnsiballZ_systemd_service.py'
Nov 22 08:13:53 compute-0 sudo[175121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:53 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 08:13:53 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 08:13:53 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.470s CPU time.
Nov 22 08:13:53 compute-0 systemd[1]: run-rfcd0537c945a449684cefea17b032162.service: Deactivated successfully.
Nov 22 08:13:54 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 22 08:13:54 compute-0 python3.9[175123]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:13:54 compute-0 iscsid[163481]: iscsid shutting down.
Nov 22 08:13:54 compute-0 systemd[1]: Stopping Open-iSCSI...
Nov 22 08:13:54 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Nov 22 08:13:54 compute-0 systemd[1]: Stopped Open-iSCSI.
Nov 22 08:13:54 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 22 08:13:54 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 22 08:13:54 compute-0 systemd[1]: Started Open-iSCSI.
Nov 22 08:13:54 compute-0 sudo[175121]: pam_unix(sudo:session): session closed for user root
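
Note: restarting iscsid stops and starts the daemon cleanly; the one-time iscsi.service setup is skipped because its ConditionPathExists=!/etc/iscsi/initiatorname.iscsi gate fails, meaning the initiator name already exists and first-boot generation is unnecessary. Shell equivalent plus a quick check of that condition:

    sudo systemctl restart iscsid.service
    test -e /etc/iscsi/initiatorname.iscsi && echo "initiator name already generated"
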
Nov 22 08:13:55 compute-0 python3.9[175280]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:13:55 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 22 08:13:55 compute-0 sudo[175435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvrhlkqgexfbuwyeqsexiysrcmgaumya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799235.5588958-688-237481608317425/AnsiballZ_file.py'
Nov 22 08:13:55 compute-0 sudo[175435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:56 compute-0 python3.9[175437]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:13:56 compute-0 sudo[175435]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:56 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 22 08:13:56 compute-0 sudo[175588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqebhcgbscjkfmakxeqhenbbaldrprjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799236.4433715-699-24958442567123/AnsiballZ_systemd_service.py'
Nov 22 08:13:56 compute-0 sudo[175588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:13:57 compute-0 python3.9[175590]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:13:57 compute-0 systemd[1]: Reloading.
Nov 22 08:13:57 compute-0 systemd-sysv-generator[175623]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:13:57 compute-0 systemd-rc-local-generator[175619]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:13:57 compute-0 sudo[175588]: pam_unix(sudo:session): session closed for user root
Nov 22 08:13:58 compute-0 python3.9[175775]: ansible-ansible.builtin.service_facts Invoked
Nov 22 08:13:58 compute-0 network[175792]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 08:13:58 compute-0 network[175793]: 'network-scripts' will be removed from distribution in near future.
Nov 22 08:13:58 compute-0 network[175794]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 08:14:01 compute-0 sudo[176066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzenewijrujpfixoormqfjzootjsqhig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799241.582679-718-86347632861366/AnsiballZ_systemd_service.py'
Nov 22 08:14:01 compute-0 sudo[176066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:02 compute-0 python3.9[176068]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:14:02 compute-0 sudo[176066]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:02 compute-0 sudo[176219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxyvxgxcluusvwznvpeztybiedgtgrqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799242.5011852-718-89427485353953/AnsiballZ_systemd_service.py'
Nov 22 08:14:02 compute-0 sudo[176219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:03 compute-0 python3.9[176221]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:14:03 compute-0 sudo[176219]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:03 compute-0 sudo[176372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vptkcmdscpibtzippgzfifegwyajecgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799243.3476284-718-5611258730214/AnsiballZ_systemd_service.py'
Nov 22 08:14:03 compute-0 sudo[176372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:03 compute-0 python3.9[176374]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:14:03 compute-0 sudo[176372]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:04 compute-0 sudo[176525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cngsgrcgusfmokmxoutwrkskgoewgmng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799244.1121695-718-255247117175263/AnsiballZ_systemd_service.py'
Nov 22 08:14:04 compute-0 sudo[176525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:04 compute-0 python3.9[176527]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:14:04 compute-0 sudo[176525]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:05 compute-0 sudo[176678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avefnycdwrzrgyzuuagakjwxelxrtzlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799244.82382-718-78120663224115/AnsiballZ_systemd_service.py'
Nov 22 08:14:05 compute-0 sudo[176678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:05 compute-0 python3.9[176680]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:14:05 compute-0 sudo[176678]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:05 compute-0 sudo[176831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yujoxciqoxlfieogoaefpogjesxlsdkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799245.5736494-718-140265984401871/AnsiballZ_systemd_service.py'
Nov 22 08:14:05 compute-0 sudo[176831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:06 compute-0 python3.9[176833]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:14:06 compute-0 sudo[176831]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:06 compute-0 sudo[176984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rakcbznjqygeocinxuslnskxjavqbaqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799246.310899-718-264144882325163/AnsiballZ_systemd_service.py'
Nov 22 08:14:06 compute-0 sudo[176984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:06 compute-0 python3.9[176986]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:14:07 compute-0 sudo[176984]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:07 compute-0 sudo[177137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlllvegebalplvkdeeeftorroavhkfzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799247.1632495-718-111407400298192/AnsiballZ_systemd_service.py'
Nov 22 08:14:07 compute-0 sudo[177137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:07 compute-0 python3.9[177139]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:14:07 compute-0 sudo[177137]: pam_unix(sudo:session): session closed for user root
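
Note: 08:14:01-08:14:07 is a per-service loop stopping and disabling the legacy tripleo_nova_* units as part of the EDPM adoption. The module applies enabled=False and state=stopped as separate properties; systemctl disable --now collapses both into one call:

    for svc in compute migration_target api_cron api conductor metadata scheduler vnc_proxy; do
        sudo systemctl disable --now "tripleo_nova_${svc}.service"
    done
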
Nov 22 08:14:08 compute-0 sudo[177290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fteuohnnqobbdimqeerluvdohhxilxse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799248.068102-777-100704643911886/AnsiballZ_file.py'
Nov 22 08:14:08 compute-0 sudo[177290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:08 compute-0 python3.9[177292]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:14:08 compute-0 sudo[177290]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:08 compute-0 sudo[177442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gttxqqegeiadcdxbshfmzlnbodlsehtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799248.729553-777-15112771667072/AnsiballZ_file.py'
Nov 22 08:14:08 compute-0 sudo[177442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:09 compute-0 python3.9[177444]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:14:09 compute-0 sudo[177442]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:09 compute-0 sudo[177594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovdekkrehtalrwuazkpdapqplyzbymak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799249.3164344-777-171761546768983/AnsiballZ_file.py'
Nov 22 08:14:09 compute-0 sudo[177594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:09 compute-0 python3.9[177596]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:14:09 compute-0 sudo[177594]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:14:09.942 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:14:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:14:09.944 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:14:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:14:09.944 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:14:10 compute-0 sudo[177746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opesdziuugvdfdszlnmbgefcoinadgac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799249.8890004-777-63943179115991/AnsiballZ_file.py'
Nov 22 08:14:10 compute-0 sudo[177746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:10 compute-0 python3.9[177748]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:14:10 compute-0 sudo[177746]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:10 compute-0 sudo[177898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efqqmeseuobuhxjfjhnuccqhlrcdahbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799250.5145612-777-85278628186312/AnsiballZ_file.py'
Nov 22 08:14:10 compute-0 sudo[177898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:10 compute-0 python3.9[177900]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:14:11 compute-0 sudo[177898]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:11 compute-0 sudo[178050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajtepzgfrkpkvmkhvchtyjygawkvexxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799251.1482477-777-21634076748695/AnsiballZ_file.py'
Nov 22 08:14:11 compute-0 sudo[178050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:11 compute-0 python3.9[178052]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:14:11 compute-0 sudo[178050]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:12 compute-0 sudo[178202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aicyrnxaeljsivldjuldkgdaeciaowgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799251.8135886-777-143386692806487/AnsiballZ_file.py'
Nov 22 08:14:12 compute-0 sudo[178202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:12 compute-0 python3.9[178204]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:14:12 compute-0 sudo[178202]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:12 compute-0 sudo[178354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clhbbmonapuufboqujhqcpibagvcvmgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799252.4792798-777-236664717120423/AnsiballZ_file.py'
Nov 22 08:14:12 compute-0 sudo[178354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:12 compute-0 python3.9[178356]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:14:12 compute-0 sudo[178354]: pam_unix(sudo:session): session closed for user root
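The six removals above (and the matching pass over /etc/systemd/system that follows) differ only in the unit path, so they were almost certainly produced by a single looped task rather than six hand-written ones. A minimal sketch of such a task, with the loop list and play structure assumed rather than taken from the log:

    - name: Remove leftover tripleo nova unit files   # sketch; list contents inferred from the log
      become: true
      ansible.builtin.file:
        path: "/usr/lib/systemd/system/tripleo_nova_{{ item }}.service"
        state: absent
      loop: [api_cron, api, conductor, metadata, scheduler, vnc_proxy]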
Nov 22 08:14:13 compute-0 podman[178365]: 2025-11-22 08:14:13.155328488 +0000 UTC m=+0.092824474 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd)
Nov 22 08:14:13 compute-0 sudo[178542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roytfmyvxopkjiyydxpjmnvzsblgwbiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799253.15454-834-40511056376916/AnsiballZ_file.py'
Nov 22 08:14:13 compute-0 podman[178499]: 2025-11-22 08:14:13.526822287 +0000 UTC m=+0.056075512 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 08:14:13 compute-0 sudo[178542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:13 compute-0 python3.9[178547]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:14:13 compute-0 sudo[178542]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:14 compute-0 sudo[178697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtyzmnrhrfmzdbahmywkbmqjzlglxmwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799253.9680328-834-174989831186818/AnsiballZ_file.py'
Nov 22 08:14:14 compute-0 sudo[178697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:14 compute-0 python3.9[178699]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:14:14 compute-0 sudo[178697]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:15 compute-0 sudo[178849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aliwnpiqriotptoysllfqghhmolhyegs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799254.745222-834-152537091137492/AnsiballZ_file.py'
Nov 22 08:14:15 compute-0 sudo[178849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:15 compute-0 python3.9[178851]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:14:15 compute-0 sudo[178849]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:15 compute-0 sudo[179001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gadlotgtbwqslaecclthimylpsfxbpow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799255.3548338-834-254408614433300/AnsiballZ_file.py'
Nov 22 08:14:15 compute-0 sudo[179001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:15 compute-0 python3.9[179003]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:14:15 compute-0 sudo[179001]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:16 compute-0 sudo[179153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtnhbphvsgqjhtlqglcbpfoiwltlrboi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799255.965631-834-198554871308375/AnsiballZ_file.py'
Nov 22 08:14:16 compute-0 sudo[179153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:16 compute-0 python3.9[179155]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:14:16 compute-0 sudo[179153]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:16 compute-0 sudo[179305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebjlebkhfsfynttawbiowzjlzsxawkuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799256.642237-834-123653097253875/AnsiballZ_file.py'
Nov 22 08:14:16 compute-0 sudo[179305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:17 compute-0 python3.9[179307]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:14:17 compute-0 sudo[179305]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:17 compute-0 sudo[179457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mizgmdgbckqdkatwqnncnuklnpugwldf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799257.2853138-834-158579088158508/AnsiballZ_file.py'
Nov 22 08:14:17 compute-0 sudo[179457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:17 compute-0 python3.9[179459]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:14:17 compute-0 sudo[179457]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:18 compute-0 sudo[179609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyrhsyccqowyajpmehidbvdkehvsvmke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799257.957368-834-188642438553025/AnsiballZ_file.py'
Nov 22 08:14:18 compute-0 sudo[179609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:18 compute-0 python3.9[179611]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:14:18 compute-0 sudo[179609]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:18 compute-0 sudo[179761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwtjlsiboqgxkfjtirecibrtohhvgctn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799258.6365454-892-97460648698129/AnsiballZ_command.py'
Nov 22 08:14:18 compute-0 sudo[179761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:19 compute-0 python3.9[179763]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:14:19 compute-0 sudo[179761]: pam_unix(sudo:session): session closed for user root
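The shell fragment logged above first checks whether certmonger is active, stops and disables it if so, and masks it only when no unit file already exists under /etc/systemd/system (the test -f guard avoids clobbering a local override with the mask symlink). Reconstructed as the task that plausibly produced it, using the script verbatim from the log:

    - name: Disable and mask certmonger when it is running   # reconstruction of the logged script
      become: true
      ansible.builtin.shell: |
        if systemctl is-active certmonger.service; then
          systemctl disable --now certmonger.service
          test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
        fi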
Nov 22 08:14:19 compute-0 python3.9[179915]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 08:14:20 compute-0 sudo[180065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioxaididxegkyfltgifagizigjwgnujp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799260.2306404-910-104194434251263/AnsiballZ_systemd_service.py'
Nov 22 08:14:20 compute-0 sudo[180065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:20 compute-0 python3.9[180067]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:14:20 compute-0 systemd[1]: Reloading.
Nov 22 08:14:20 compute-0 systemd-rc-local-generator[180095]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:14:20 compute-0 systemd-sysv-generator[180098]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:14:21 compute-0 sudo[180065]: pam_unix(sudo:session): session closed for user root
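A daemon reload is needed after deleting unit files so systemd drops the stale units from its runtime state; the "Reloading." line and the rc-local/sysv generator chatter above are the normal side effects of that reload re-running the generators. The equivalent task, as logged:

    - name: Reload systemd after removing tripleo units
      become: true
      ansible.builtin.systemd_service:
        daemon_reload: true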
Nov 22 08:14:21 compute-0 sudo[180252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txiynxmvhlgazrtgfbunavkxgfdlurtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799261.2732859-918-68430495717633/AnsiballZ_command.py'
Nov 22 08:14:21 compute-0 sudo[180252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:21 compute-0 python3.9[180254]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:14:21 compute-0 sudo[180252]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:22 compute-0 sudo[180421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtwyspokkqcppzwksuvynedeizttokoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799261.8683171-918-139710409857173/AnsiballZ_command.py'
Nov 22 08:14:22 compute-0 sudo[180421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:22 compute-0 podman[180359]: 2025-11-22 08:14:22.131479156 +0000 UTC m=+0.085287689 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible)
Nov 22 08:14:22 compute-0 python3.9[180428]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:14:22 compute-0 sudo[180421]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:22 compute-0 sudo[180584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nobhnjoagxsfruzqcjqxcuwuukcuvndm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799262.4607909-918-50653159723765/AnsiballZ_command.py'
Nov 22 08:14:22 compute-0 sudo[180584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:22 compute-0 python3.9[180586]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:14:22 compute-0 sudo[180584]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:23 compute-0 sudo[180737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvqfggzooiwmswudiryhnoczlltbakla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799263.0530698-918-281188103222540/AnsiballZ_command.py'
Nov 22 08:14:23 compute-0 sudo[180737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:23 compute-0 python3.9[180739]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:14:23 compute-0 sudo[180737]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:23 compute-0 sudo[180890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfvsmljinlzoyujaydqmbxvrzqaxmvyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799263.6323197-918-230023619198456/AnsiballZ_command.py'
Nov 22 08:14:23 compute-0 sudo[180890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:24 compute-0 python3.9[180892]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:14:24 compute-0 sudo[180890]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:24 compute-0 sudo[181043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcnocnpxjztmfnvlohhnqtkfpqhmcfsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799264.268124-918-209633623270865/AnsiballZ_command.py'
Nov 22 08:14:24 compute-0 sudo[181043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:24 compute-0 python3.9[181045]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:14:24 compute-0 sudo[181043]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:25 compute-0 sudo[181196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vliysrhzmexhuzxppzpkenaplxoehmvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799264.9863777-918-233028442301583/AnsiballZ_command.py'
Nov 22 08:14:25 compute-0 sudo[181196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:25 compute-0 python3.9[181198]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:14:25 compute-0 sudo[181196]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:25 compute-0 sudo[181349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ityqftrcokqpvheieqwryybtzykmzklh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799265.5622826-918-27403878308898/AnsiballZ_command.py'
Nov 22 08:14:25 compute-0 sudo[181349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:26 compute-0 python3.9[181351]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:14:26 compute-0 sudo[181349]: pam_unix(sudo:session): session closed for user root
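systemctl reset-failed clears the "failed" bookkeeping that can linger for units whose files were just removed. The eight invocations above fit one looped command task, sketched here; the failed_when relaxation is an assumption, since reset-failed errors for units systemd no longer knows about:

    - name: Clear failed state for removed tripleo nova services   # sketch; likely a loop
      become: true
      ansible.builtin.command: "/usr/bin/systemctl reset-failed {{ item }}"
      failed_when: false   # assumed; unknown units make reset-failed return non-zero
      loop:
        - tripleo_nova_compute.service
        - tripleo_nova_migration_target.service
        - tripleo_nova_api_cron.service
        - tripleo_nova_api.service
        - tripleo_nova_conductor.service
        - tripleo_nova_metadata.service
        - tripleo_nova_scheduler.service
        - tripleo_nova_vnc_proxy.service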
Nov 22 08:14:27 compute-0 sudo[181502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkldxbhvkqtmhqqyhgnsnikeywameitt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799266.9501998-997-17074268301915/AnsiballZ_file.py'
Nov 22 08:14:27 compute-0 sudo[181502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:27 compute-0 python3.9[181504]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:14:27 compute-0 sudo[181502]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:27 compute-0 sudo[181654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okqjtlfkuczaxrahoxxzmjbuixrdwqfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799267.576302-997-32601865314594/AnsiballZ_file.py'
Nov 22 08:14:27 compute-0 sudo[181654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:28 compute-0 python3.9[181656]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:14:28 compute-0 sudo[181654]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:28 compute-0 sudo[181806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixdjvfctciqzfxwvdvcqtdnhdsphklcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799268.154675-997-74184361194946/AnsiballZ_file.py'
Nov 22 08:14:28 compute-0 sudo[181806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:28 compute-0 python3.9[181808]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:14:28 compute-0 sudo[181806]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:29 compute-0 sudo[181958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udxkodsollwpzgznnclamsvcvuaefatj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799268.8417964-1019-13800158821897/AnsiballZ_file.py'
Nov 22 08:14:29 compute-0 sudo[181958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:29 compute-0 python3.9[181960]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:14:29 compute-0 sudo[181958]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:29 compute-0 sudo[182110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbkmplfbhqwiujihodjvedhphpsanprn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799269.6287775-1019-58248016007926/AnsiballZ_file.py'
Nov 22 08:14:29 compute-0 sudo[182110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:30 compute-0 python3.9[182112]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:14:30 compute-0 sudo[182110]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:30 compute-0 sudo[182262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ralvadeuuygexdaktemnivrniqblgpui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799270.2679996-1019-4984508817067/AnsiballZ_file.py'
Nov 22 08:14:30 compute-0 sudo[182262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:30 compute-0 python3.9[182264]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:14:30 compute-0 sudo[182262]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:31 compute-0 sudo[182414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hygmzywfmfsdruyeyvmjnvvgmqoxxjow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799270.8714106-1019-186597676154526/AnsiballZ_file.py'
Nov 22 08:14:31 compute-0 sudo[182414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:31 compute-0 python3.9[182416]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:14:31 compute-0 sudo[182414]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:31 compute-0 sudo[182566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ordswksrvxnmayrmmqjfnefducezdion ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799271.4487019-1019-99812887609248/AnsiballZ_file.py'
Nov 22 08:14:31 compute-0 sudo[182566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:31 compute-0 python3.9[182568]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:14:31 compute-0 sudo[182566]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:32 compute-0 sudo[182718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxxjusvxfgoqnfjwyehryucctjkzawvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799272.1190987-1019-190776357008632/AnsiballZ_file.py'
Nov 22 08:14:32 compute-0 sudo[182718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:32 compute-0 python3.9[182720]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:14:32 compute-0 sudo[182718]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:33 compute-0 sudo[182870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-expnygtzyiakjnuuwjguowizfhzigocb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799272.739962-1019-195005419206748/AnsiballZ_file.py'
Nov 22 08:14:33 compute-0 sudo[182870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:33 compute-0 python3.9[182872]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:14:33 compute-0 sudo[182870]: pam_unix(sudo:session): session closed for user root
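Every directory in this run is created with setype=container_file_t so podman can bind-mount it into containers; ownership and mode vary per path (zuul-owned config trees at 0755, root-owned /etc/ceph at 0750, several paths with no explicit mode). A condensed sketch assuming a loop over per-path dicts, which the log neither confirms nor rules out:

    - name: Ensure nova/EDPM host directories exist   # condensed sketch; loop structure assumed
      become: true
      ansible.builtin.file:
        path: "{{ item.path }}"
        state: directory
        owner: "{{ item.owner | default('zuul') }}"
        group: "{{ item.owner | default('zuul') }}"
        mode: "{{ item.mode | default(omit) }}"
        setype: container_file_t
      loop:
        - { path: /var/lib/openstack/config/nova, mode: '0755' }
        - { path: /var/lib/openstack/config/containers, mode: '0755' }
        - { path: /var/lib/openstack/config/nova_nvme_cleaner, mode: '0755' }
        - { path: /var/lib/nova, mode: '0755' }
        - { path: /var/lib/_nova_secontext, mode: '0755' }
        - { path: /var/lib/nova/instances, mode: '0755' }
        - { path: /etc/ceph, owner: root, mode: '0750' }
        - { path: /etc/multipath }
        - { path: /etc/nvme }
        - { path: /run/openvswitch }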
Nov 22 08:14:37 compute-0 sudo[183022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bayakmattgswnudombnguhnfxjzhftle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799277.282949-1188-144716389891181/AnsiballZ_getent.py'
Nov 22 08:14:37 compute-0 sudo[183022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:37 compute-0 python3.9[183024]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 22 08:14:37 compute-0 sudo[183022]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:38 compute-0 sudo[183175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiovtuiwfuhejnqqbvdyhiftuaueoglg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799278.1435962-1196-34218881477076/AnsiballZ_group.py'
Nov 22 08:14:38 compute-0 sudo[183175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:38 compute-0 python3.9[183177]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 08:14:39 compute-0 groupadd[183178]: group added to /etc/group: name=nova, GID=42436
Nov 22 08:14:39 compute-0 groupadd[183178]: group added to /etc/gshadow: name=nova
Nov 22 08:14:39 compute-0 groupadd[183178]: new group: name=nova, GID=42436
Nov 22 08:14:39 compute-0 sudo[183175]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:39 compute-0 sudo[183333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muvdngouommhnqrlclujqiajtezfxazq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799279.341372-1204-29311394217202/AnsiballZ_user.py'
Nov 22 08:14:39 compute-0 sudo[183333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:40 compute-0 python3.9[183335]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 08:14:40 compute-0 useradd[183337]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 22 08:14:40 compute-0 useradd[183337]: add 'nova' to group 'libvirt'
Nov 22 08:14:40 compute-0 useradd[183337]: add 'nova' to shadow group 'libvirt'
Nov 22 08:14:40 compute-0 sudo[183333]: pam_unix(sudo:session): session closed for user root
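After the getent probe for the nova account, the groupadd/useradd records show the group and user being created fresh with the fixed deployment IDs: GID/UID 42436, login shell /bin/sh, and supplementary membership in libvirt so the nova user can reach the hypervisor. Reconstructed from the logged module parameters:

    - name: Create the nova group with the deployment-wide GID
      become: true
      ansible.builtin.group:
        name: nova
        gid: 42436
        state: present

    - name: Create the nova user and add it to libvirt
      become: true
      ansible.builtin.user:
        name: nova
        uid: 42436
        group: nova
        groups: [libvirt]
        shell: /bin/sh
        comment: nova user
        state: present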
Nov 22 08:14:41 compute-0 sshd-session[183368]: Accepted publickey for zuul from 192.168.122.30 port 39478 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 08:14:41 compute-0 systemd-logind[826]: New session 25 of user zuul.
Nov 22 08:14:41 compute-0 systemd[1]: Started Session 25 of User zuul.
Nov 22 08:14:41 compute-0 sshd-session[183368]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 08:14:41 compute-0 sshd-session[183371]: Received disconnect from 192.168.122.30 port 39478:11: disconnected by user
Nov 22 08:14:41 compute-0 sshd-session[183371]: Disconnected from user zuul 192.168.122.30 port 39478
Nov 22 08:14:41 compute-0 sshd-session[183368]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:14:41 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Nov 22 08:14:41 compute-0 systemd-logind[826]: Session 25 logged out. Waiting for processes to exit.
Nov 22 08:14:41 compute-0 systemd-logind[826]: Removed session 25.
Nov 22 08:14:42 compute-0 python3.9[183521]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:14:42 compute-0 python3.9[183642]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763799281.5463939-1229-13002898500914/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:14:43 compute-0 python3.9[183792]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:14:43 compute-0 podman[183793]: 2025-11-22 08:14:43.273930064 +0000 UTC m=+0.068958901 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Nov 22 08:14:43 compute-0 python3.9[183889]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:14:43 compute-0 podman[183890]: 2025-11-22 08:14:43.6938276 +0000 UTC m=+0.043836005 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 08:14:44 compute-0 python3.9[184059]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:14:44 compute-0 python3.9[184180]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763799283.779099-1229-131028149633902/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:14:45 compute-0 python3.9[184330]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:14:46 compute-0 python3.9[184451]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763799285.1036475-1229-97255516157029/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:14:46 compute-0 python3.9[184601]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:14:47 compute-0 python3.9[184722]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763799286.2465653-1229-100314092608474/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:14:47 compute-0 python3.9[184872]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:14:48 compute-0 python3.9[184993]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763799287.3377893-1229-258810227560162/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
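Each file in this batch follows the same two-step pattern: ansible.legacy.stat fetches the destination checksum, then ansible.legacy.copy rewrites the file only when the SHA1 differs — the standard idempotence path of the copy/template modules. The .j2 basenames (config.json.j2, 02-nova-host-specific.conf.j2) suggest these arrive via template rendering; one representative task, sketched from that inference:

    - name: Render host-specific nova configuration   # sketch inferred from _original_basename
      ansible.builtin.template:
        src: 02-nova-host-specific.conf.j2
        dest: /var/lib/openstack/config/nova/02-nova-host-specific.conf
        mode: '0644'
        setype: container_file_t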
Nov 22 08:14:49 compute-0 sudo[185143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxvxjjduvnaccbvamxefkivfbnfqttnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799288.7262087-1312-29649616919625/AnsiballZ_file.py'
Nov 22 08:14:49 compute-0 sudo[185143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:49 compute-0 python3.9[185145]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:14:49 compute-0 sudo[185143]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:49 compute-0 sudo[185295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgmkwolwkniubskvdhhbbjivbzojeflj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799289.4368768-1320-255912609229614/AnsiballZ_copy.py'
Nov 22 08:14:49 compute-0 sudo[185295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:50 compute-0 python3.9[185297]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:14:50 compute-0 sudo[185295]: pam_unix(sudo:session): session closed for user root
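The key installed here is the counterpart to the ssh-config written a few steps earlier; placing the public key in /home/nova/.ssh/authorized_keys lets peer compute hosts reach this node as the nova user, the usual arrangement for migration and resize (an inference — the log does not state the purpose). The two tasks, reconstructed from the logged parameters:

    - name: Create the nova user's .ssh directory
      become: true
      ansible.builtin.file:
        path: /home/nova/.ssh
        state: directory
        owner: nova
        group: nova
        mode: '0700'

    - name: Install the deployment public key for the nova user
      become: true
      ansible.builtin.copy:
        src: /var/lib/openstack/config/nova/ssh-publickey
        dest: /home/nova/.ssh/authorized_keys
        remote_src: true   # the source file already sits on the host
        owner: nova
        group: nova
        mode: '0600'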
Nov 22 08:14:50 compute-0 sudo[185447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxvpximkkcytewukrevgczdeqfvwsjdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799290.229148-1328-156306515685023/AnsiballZ_stat.py'
Nov 22 08:14:50 compute-0 sudo[185447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:50 compute-0 python3.9[185449]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:14:50 compute-0 sudo[185447]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:51 compute-0 sudo[185599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhezhjzcnkhavempkvgwwvzmudlmxxhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799290.9121227-1336-224829818683778/AnsiballZ_stat.py'
Nov 22 08:14:51 compute-0 sudo[185599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:51 compute-0 python3.9[185601]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:14:51 compute-0 sudo[185599]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:51 compute-0 sudo[185722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcrqgcgkrdetleqidqxairsapyoolntq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799290.9121227-1336-224829818683778/AnsiballZ_copy.py'
Nov 22 08:14:51 compute-0 sudo[185722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:51 compute-0 python3.9[185724]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1763799290.9121227-1336-224829818683778/.source _original_basename=.x1h8gckp follow=False checksum=6dea40df71ce88d3599f89074f7df8491e91e381 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 22 08:14:52 compute-0 sudo[185722]: pam_unix(sudo:session): session closed for user root
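/var/lib/nova/compute_id holds the node's stable compute UUID. The stat-then-copy above writes it once as nova-owned and read-only (0400), and sets the immutable attribute (attributes=+i) so nothing rewrites it in place; the init container created below is explicitly told to skip it for the same reason. A sketch, with the content variable hypothetical since the log masks the payload:

    - name: Persist the compute node UUID and make the file immutable
      become: true
      ansible.builtin.copy:
        content: "{{ compute_uuid }}"   # hypothetical variable; the log hides the real content
        dest: /var/lib/nova/compute_id
        owner: nova
        group: nova
        mode: '0400'
        attributes: +i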
Nov 22 08:14:52 compute-0 podman[185850]: 2025-11-22 08:14:52.849699327 +0000 UTC m=+0.096685807 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 22 08:14:52 compute-0 python3.9[185888]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:14:53 compute-0 python3.9[186053]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:14:54 compute-0 python3.9[186174]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763799293.1729157-1362-210896361497383/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:14:54 compute-0 python3.9[186324]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:14:55 compute-0 python3.9[186445]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763799294.4139473-1377-63292149271421/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:14:55 compute-0 sudo[186595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaorzugkvsltwhgtlvogpawscuxdscsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799295.6538112-1394-238346336305046/AnsiballZ_container_config_data.py'
Nov 22 08:14:55 compute-0 sudo[186595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:56 compute-0 python3.9[186597]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 22 08:14:56 compute-0 sudo[186595]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:56 compute-0 sudo[186747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-truvzsrtoqitsuvmkthesezabeyldhdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799296.3093066-1403-238572549124234/AnsiballZ_container_config_hash.py'
Nov 22 08:14:56 compute-0 sudo[186747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:56 compute-0 python3.9[186749]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 08:14:56 compute-0 sudo[186747]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:57 compute-0 sudo[186899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwjmdwcdehykhcdpeieqidkzylbixtug ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763799297.0978293-1413-143258893386323/AnsiballZ_edpm_container_manage.py'
Nov 22 08:14:57 compute-0 sudo[186899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:57 compute-0 python3[186901]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 08:14:57 compute-0 podman[186938]: 2025-11-22 08:14:57.85010994 +0000 UTC m=+0.044455553 container create bfc793705572866b8d6046d02b63f98674a2d5b137f35f2f96f89fc139370043 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=nova_compute_init, managed_by=edpm_ansible)
Nov 22 08:14:57 compute-0 podman[186938]: 2025-11-22 08:14:57.8258752 +0000 UTC m=+0.020220833 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 22 08:14:57 compute-0 python3[186901]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 22 08:14:57 compute-0 sudo[186899]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:58 compute-0 sudo[187124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbtjhonyunetjkcawzefxggerlfqicpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799298.1739776-1421-168634041170113/AnsiballZ_stat.py'
Nov 22 08:14:58 compute-0 sudo[187124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:58 compute-0 python3.9[187126]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:14:58 compute-0 sudo[187124]: pam_unix(sudo:session): session closed for user root
Nov 22 08:14:59 compute-0 sudo[187278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhbuvtadvwikotrfibfkmritantdvuug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799299.1120377-1433-30440628825488/AnsiballZ_container_config_data.py'
Nov 22 08:14:59 compute-0 sudo[187278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:14:59 compute-0 python3.9[187280]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 22 08:14:59 compute-0 sudo[187278]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:00 compute-0 sudo[187430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujrgvacxgzsjbluzakmsaarkdalmyiyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799299.843273-1442-159981486191097/AnsiballZ_container_config_hash.py'
Nov 22 08:15:00 compute-0 sudo[187430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:00 compute-0 python3.9[187432]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 08:15:00 compute-0 sudo[187430]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:00 compute-0 sudo[187582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkudlyowladpuokeuxvnvumdeimndnya ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763799300.6360652-1452-194834274007226/AnsiballZ_edpm_container_manage.py'
Nov 22 08:15:00 compute-0 sudo[187582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:01 compute-0 python3[187584]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 08:15:01 compute-0 podman[187619]: 2025-11-22 08:15:01.359897899 +0000 UTC m=+0.053040547 container create ba7acdd6cf7f4bb0a614021d24964396e1d12923ed42e580d4b77adadaa1f30f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 08:15:01 compute-0 podman[187619]: 2025-11-22 08:15:01.331074892 +0000 UTC m=+0.024217560 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 22 08:15:01 compute-0 python3[187584]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 22 08:15:01 compute-0 sudo[187582]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:01 compute-0 sudo[187807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjsrkvfgdanmvrjaemgonptllhqdtoth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799301.6342323-1460-248891197154383/AnsiballZ_stat.py'
Nov 22 08:15:01 compute-0 sudo[187807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:02 compute-0 python3.9[187809]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:15:02 compute-0 sudo[187807]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:02 compute-0 sudo[187961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpweauzadskrrvvrxmfujsjsiuoxfcih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799302.395977-1469-162481747735242/AnsiballZ_file.py'
Nov 22 08:15:02 compute-0 sudo[187961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:02 compute-0 python3.9[187963]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:15:02 compute-0 sudo[187961]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:03 compute-0 sudo[188112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djhkrlpawosszzssrgjiynhojpnpsnay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799302.96542-1469-43660866350392/AnsiballZ_copy.py'
Nov 22 08:15:03 compute-0 sudo[188112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:03 compute-0 python3.9[188114]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763799302.96542-1469-43660866350392/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:15:03 compute-0 sudo[188112]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:03 compute-0 sudo[188188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtbhrmialbfrcvjzkkgnoftypmcytmdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799302.96542-1469-43660866350392/AnsiballZ_systemd.py'
Nov 22 08:15:03 compute-0 sudo[188188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:04 compute-0 python3.9[188190]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:15:04 compute-0 systemd[1]: Reloading.
Nov 22 08:15:04 compute-0 systemd-rc-local-generator[188218]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:15:04 compute-0 systemd-sysv-generator[188221]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:15:04 compute-0 sudo[188188]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:04 compute-0 sudo[188299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-safyvicgwxwnpxsuubwjrdkmndhqeodo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799302.96542-1469-43660866350392/AnsiballZ_systemd.py'
Nov 22 08:15:04 compute-0 sudo[188299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:05 compute-0 python3.9[188301]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:15:05 compute-0 systemd[1]: Reloading.
Nov 22 08:15:05 compute-0 systemd-rc-local-generator[188330]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:15:05 compute-0 systemd-sysv-generator[188333]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:15:05 compute-0 systemd[1]: Starting nova_compute container...
Nov 22 08:15:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5daa781c1237bc67efcfd19f8916ed7a1e30b893d937b708b998b6240a35d9f8/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 22 08:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5daa781c1237bc67efcfd19f8916ed7a1e30b893d937b708b998b6240a35d9f8/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 08:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5daa781c1237bc67efcfd19f8916ed7a1e30b893d937b708b998b6240a35d9f8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 22 08:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5daa781c1237bc67efcfd19f8916ed7a1e30b893d937b708b998b6240a35d9f8/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 22 08:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5daa781c1237bc67efcfd19f8916ed7a1e30b893d937b708b998b6240a35d9f8/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 08:15:05 compute-0 podman[188341]: 2025-11-22 08:15:05.522224886 +0000 UTC m=+0.120913397 container init ba7acdd6cf7f4bb0a614021d24964396e1d12923ed42e580d4b77adadaa1f30f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.license=GPLv2, config_id=edpm)
Nov 22 08:15:05 compute-0 podman[188341]: 2025-11-22 08:15:05.528894378 +0000 UTC m=+0.127582869 container start ba7acdd6cf7f4bb0a614021d24964396e1d12923ed42e580d4b77adadaa1f30f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 08:15:05 compute-0 nova_compute[188356]: + sudo -E kolla_set_configs
Nov 22 08:15:05 compute-0 podman[188341]: nova_compute
Nov 22 08:15:05 compute-0 systemd[1]: Started nova_compute container.
Nov 22 08:15:05 compute-0 sudo[188299]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Validating config file
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Copying service configuration files
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Deleting /etc/ceph
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Creating directory /etc/ceph
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Setting permission for /etc/ceph
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Writing out command to execute
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 08:15:05 compute-0 nova_compute[188356]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 08:15:05 compute-0 nova_compute[188356]: ++ cat /run_command
Nov 22 08:15:05 compute-0 nova_compute[188356]: + CMD=nova-compute
Nov 22 08:15:05 compute-0 nova_compute[188356]: + ARGS=
Nov 22 08:15:05 compute-0 nova_compute[188356]: + sudo kolla_copy_cacerts
Nov 22 08:15:05 compute-0 nova_compute[188356]: + [[ ! -n '' ]]
Nov 22 08:15:05 compute-0 nova_compute[188356]: + . kolla_extend_start
Nov 22 08:15:05 compute-0 nova_compute[188356]: Running command: 'nova-compute'
Nov 22 08:15:05 compute-0 nova_compute[188356]: + echo 'Running command: '\''nova-compute'\'''
Nov 22 08:15:05 compute-0 nova_compute[188356]: + umask 0022
Nov 22 08:15:05 compute-0 nova_compute[188356]: + exec nova-compute
Nov 22 08:15:06 compute-0 python3.9[188517]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:15:07 compute-0 python3.9[188668]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:15:07 compute-0 nova_compute[188356]: 2025-11-22 08:15:07.787 188360 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 08:15:07 compute-0 nova_compute[188356]: 2025-11-22 08:15:07.788 188360 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 08:15:07 compute-0 nova_compute[188356]: 2025-11-22 08:15:07.788 188360 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 08:15:07 compute-0 nova_compute[188356]: 2025-11-22 08:15:07.788 188360 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 22 08:15:07 compute-0 python3.9[188820]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:15:07 compute-0 nova_compute[188356]: 2025-11-22 08:15:07.924 188360 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:15:07 compute-0 nova_compute[188356]: 2025-11-22 08:15:07.946 188360 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:15:07 compute-0 nova_compute[188356]: 2025-11-22 08:15:07.946 188360 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 22 08:15:08 compute-0 sudo[188972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixabaipuhqbxpqcjkdhjxlocucvvxrzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799308.193553-1529-47229522224169/AnsiballZ_podman_container.py'
Nov 22 08:15:08 compute-0 sudo[188972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:08 compute-0 python3.9[188974]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 22 08:15:09 compute-0 sudo[188972]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:09 compute-0 rsyslogd[1013]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.414 188360 INFO nova.virt.driver [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.511 188360 INFO nova.compute.provider_config [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.524 188360 DEBUG oslo_concurrency.lockutils [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.524 188360 DEBUG oslo_concurrency.lockutils [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.524 188360 DEBUG oslo_concurrency.lockutils [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.525 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.525 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.525 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.525 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.525 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.525 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.526 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.526 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.526 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.526 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.526 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.526 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.526 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.527 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.527 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.527 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.527 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.527 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.527 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.528 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.528 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.528 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.528 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.528 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.528 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.528 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.529 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.529 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.529 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.529 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.529 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.529 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.529 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.530 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.530 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.530 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.530 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.530 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.530 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.531 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.531 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.531 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.531 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.531 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.532 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.532 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.532 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.532 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.532 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.532 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.532 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.533 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.533 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.533 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.533 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.533 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.533 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.533 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.534 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.534 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.534 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.534 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.534 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.534 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.534 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.535 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.535 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.535 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.535 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.535 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.535 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.536 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.536 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.536 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.536 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.536 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.536 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.537 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.537 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.537 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.537 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.537 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.537 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.537 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.538 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.538 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.538 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.538 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.538 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.538 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.538 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.539 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.539 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.539 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.539 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.539 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.540 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.540 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.540 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.540 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.540 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.540 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.540 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.541 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.541 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.541 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.541 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.541 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.541 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.541 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.542 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.542 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.542 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.542 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.542 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.542 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.543 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.543 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.543 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.543 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.543 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.543 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.543 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.544 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.544 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.544 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.544 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.544 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.544 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.544 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.545 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.545 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.545 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.545 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.545 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.545 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.546 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.546 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.546 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.546 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.546 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.546 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.546 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.547 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.547 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.547 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.547 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.547 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.547 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.547 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.548 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.548 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.548 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.548 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.548 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.548 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.549 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.549 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.549 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.549 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.549 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.549 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.549 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.550 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.550 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.550 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.550 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.550 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.550 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.550 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.551 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.551 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.551 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.551 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.551 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.551 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.552 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.552 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.552 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.552 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.552 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.553 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.553 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.553 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.553 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.553 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.553 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.554 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.554 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.554 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.554 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.554 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.554 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.554 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.555 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.555 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.555 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.555 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.555 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.555 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.556 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.556 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.556 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.556 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.556 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.556 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.557 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.557 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.557 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.557 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.557 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.557 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.557 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.558 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.558 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.558 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.558 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.558 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.558 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.558 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.559 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.559 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.559 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.559 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.559 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.559 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.560 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.560 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.560 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.560 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.560 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.561 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.561 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.561 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.561 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.561 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.561 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.561 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.562 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.562 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.562 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.562 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.562 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.562 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.562 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.563 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.563 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.563 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.563 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.563 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.563 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.563 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.564 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.564 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.564 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.564 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.564 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.564 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.565 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.565 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.565 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.565 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.565 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.565 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.565 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.566 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.566 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.566 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.566 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.566 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.567 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.567 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.567 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.567 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.567 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.567 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.568 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.568 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.568 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.568 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.568 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.568 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.569 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.569 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.569 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.569 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.569 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.570 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.570 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.570 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.570 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.570 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.570 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.570 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.571 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.571 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.571 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.571 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.571 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.571 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.571 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.572 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.572 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.572 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.572 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.572 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.572 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.573 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.573 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.573 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.573 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.573 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.573 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.573 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.573 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.574 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.574 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.574 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.574 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.574 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.574 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.574 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.575 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.575 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.575 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.575 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.575 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.575 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.576 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.576 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.576 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.576 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.576 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.576 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.576 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.577 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.577 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.577 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.577 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.577 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.577 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.577 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.578 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.578 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.578 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.578 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.578 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.578 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.578 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.579 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.579 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.579 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.579 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.579 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.579 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.580 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.580 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.580 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.580 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.580 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.580 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.580 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.581 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.581 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.581 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.581 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.581 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.581 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.581 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.582 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.582 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.582 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.582 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.582 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.582 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.582 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.583 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.583 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.583 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.583 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.583 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.583 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.583 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.584 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.584 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.584 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.584 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.584 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.584 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.584 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.585 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.585 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.585 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.585 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.585 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.585 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.586 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.586 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.586 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.586 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.586 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.586 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.587 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.587 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.587 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.587 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.587 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.587 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.587 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.587 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.588 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.588 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.588 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.588 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.588 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.588 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.588 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.589 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.589 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.589 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.589 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.589 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.589 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.589 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.590 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.590 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.590 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.590 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.590 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.591 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.591 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.591 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.591 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.591 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.592 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.592 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.592 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.592 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.592 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.592 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 sudo[189146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzqgxjqqixixiryijokqrujokcducfhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799309.3484635-1537-137791544293479/AnsiballZ_systemd.py'
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.593 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.593 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.593 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.593 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.593 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.594 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.594 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.594 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.594 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.594 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.595 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.595 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.595 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.595 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.595 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 sudo[189146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.595 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.595 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.596 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.596 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.596 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.596 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.596 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.596 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.597 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.597 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.597 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.597 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.597 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.598 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.598 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.598 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.598 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.598 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.599 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.599 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.599 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.599 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.599 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.600 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.600 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.600 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.600 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.600 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.601 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.601 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.601 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.601 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.601 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.602 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.602 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.602 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.603 188360 WARNING oslo_config.cfg [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 22 08:15:09 compute-0 nova_compute[188356]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 22 08:15:09 compute-0 nova_compute[188356]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 22 08:15:09 compute-0 nova_compute[188356]: and ``live_migration_inbound_addr`` respectively.
Nov 22 08:15:09 compute-0 nova_compute[188356]: ).  Its value may be silently ignored in the future.
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.603 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
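[editor's note: the WARNING above names the two options that replace the deprecated live_migration_uri. A minimal nova.conf sketch of the equivalent settings follows; the option names and the "tls" scheme are taken from the warning and the logged value qemu+tls://%s/system, while the inbound address is a hypothetical placeholder (a documentation IP), not a value from this log:

    [libvirt]
    # replaces live_migration_uri = qemu+tls://%s/system; "tls" matches the qemu+tls scheme
    live_migration_scheme = tls
    # hypothetical example; set per host to the address migration traffic should target
    live_migration_inbound_addr = 192.0.2.10

end note]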
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.603 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.603 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.604 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.604 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.604 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.604 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.605 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.605 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.605 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.605 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.605 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.606 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.606 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.606 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.606 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.607 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.607 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.607 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.607 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.607 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.608 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.608 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.608 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.608 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.608 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.609 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.609 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.609 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.609 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.610 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.610 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.610 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.610 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.611 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.611 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.611 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.611 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.612 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.612 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.612 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.612 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.612 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.613 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.613 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.613 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.613 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.613 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.614 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.614 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.614 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.614 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.614 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.615 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.615 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.615 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.615 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.615 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.616 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.616 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.616 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.616 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.616 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.616 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.617 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.617 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.617 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.617 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.617 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.617 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.618 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.618 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.618 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.618 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.619 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.619 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.619 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.619 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.619 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.620 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.620 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.620 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.620 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.620 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.621 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.621 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.621 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.621 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.622 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.622 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.622 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.622 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.622 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.623 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.623 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.623 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.623 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.624 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.624 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.624 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.624 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.624 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.625 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.625 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.625 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.625 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.625 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.626 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.626 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.626 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.626 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.627 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.627 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.627 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.627 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.627 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.628 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.628 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.628 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.628 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.628 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.629 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.629 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.629 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.629 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.630 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.630 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.630 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.630 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.630 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.631 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.631 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.631 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.631 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.632 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.632 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.632 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.632 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.632 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.633 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.633 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.633 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.633 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.634 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.634 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.634 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.634 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.635 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.635 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.635 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.635 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.635 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.636 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.636 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.636 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.636 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.637 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.637 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.637 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.637 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.637 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.638 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.638 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.638 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.638 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.639 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.639 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.639 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.639 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.639 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.640 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.640 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.640 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.640 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.641 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.641 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.641 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.641 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.641 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.642 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.642 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.642 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.643 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.643 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.643 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.643 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.644 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.644 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.644 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.644 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.645 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.645 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.645 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.645 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.645 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.646 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.646 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.646 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.646 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.647 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.647 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.647 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.647 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.647 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.648 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.648 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.648 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.648 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.648 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.648 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.649 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.649 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.649 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.649 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.650 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.650 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.650 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.650 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.650 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.651 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.651 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.651 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.651 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.651 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.651 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.651 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.652 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.652 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.652 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.652 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.652 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.652 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.652 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.653 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.653 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.653 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.653 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.653 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.653 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.653 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.653 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.654 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.654 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.654 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.654 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.654 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.654 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.654 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.655 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.655 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.655 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.655 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.655 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.656 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.656 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.656 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.656 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.656 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.656 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.656 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.656 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.657 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.657 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.657 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.657 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.657 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.657 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.657 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.658 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.658 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.658 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.658 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.658 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.658 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.658 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.659 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.659 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.659 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.659 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.659 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.659 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.659 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.660 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.660 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.660 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.660 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.660 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.660 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.660 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.661 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.661 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.661 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.661 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.661 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.661 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.661 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.662 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.662 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.662 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.662 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.662 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.662 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.662 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.663 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.663 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.663 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.663 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.663 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.663 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.663 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.664 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.664 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.664 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.664 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.664 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.664 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.664 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.665 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.665 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.665 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.665 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.665 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.665 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.665 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.666 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.666 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.666 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.666 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.666 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.666 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.667 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.667 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.667 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.667 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.667 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.667 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.667 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.668 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.668 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.668 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.668 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.668 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.668 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.668 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.669 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.669 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.669 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.669 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.669 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.669 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.669 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.669 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.670 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.670 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.670 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.670 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.670 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.670 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.670 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.671 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.671 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.671 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.671 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.671 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.671 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.671 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.671 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.672 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.672 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.672 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.672 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.672 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.672 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.672 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.673 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.673 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.673 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.673 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.673 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.673 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.673 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.674 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.674 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.674 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.674 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.674 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.674 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.674 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.675 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.675 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.675 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.675 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.675 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.675 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.675 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.676 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.676 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.676 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.676 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.676 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.676 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.676 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.676 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.677 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.677 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.677 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.677 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.677 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.677 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.677 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.678 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.678 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.678 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.678 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.678 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.678 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.678 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.679 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.679 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.679 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.679 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.679 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.679 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.679 188360 DEBUG oslo_service.service [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
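[editor's note] The asterisk banner above closes the full option dump that oslo.config emits at service startup when debug logging is enabled; every "group.option = value ... log_opt_values" line before it comes from ConfigOpts.log_opt_values(). A minimal sketch of how a service produces such a dump, assuming oslo.config is installed; the option registered here is illustrative, not taken from nova's startup code:

    # Minimal sketch: reproduce the "group.option = value" dump format
    # seen in the log above. The registered option is an example only.
    import logging
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('thread_pool_size', default=8)],
                       group='privsep_osbrick')

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF([])                                   # parse an empty command line
    CONF.log_opt_values(LOG, logging.DEBUG)    # one DEBUG line per option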
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.680 188360 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.692 188360 DEBUG nova.virt.libvirt.host [None req-33f74462-d404-4599-a8fa-4436061a642b - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.693 188360 DEBUG nova.virt.libvirt.host [None req-33f74462-d404-4599-a8fa-4436061a642b - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.693 188360 DEBUG nova.virt.libvirt.host [None req-33f74462-d404-4599-a8fa-4436061a642b - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.694 188360 DEBUG nova.virt.libvirt.host [None req-33f74462-d404-4599-a8fa-4436061a642b - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 22 08:15:09 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 22 08:15:09 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.778 188360 DEBUG nova.virt.libvirt.host [None req-33f74462-d404-4599-a8fa-4436061a642b - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f6e8b8d2a30> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.780 188360 DEBUG nova.virt.libvirt.host [None req-33f74462-d404-4599-a8fa-4436061a642b - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f6e8b8d2a30> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.781 188360 INFO nova.virt.libvirt.driver [None req-33f74462-d404-4599-a8fa-4436061a642b - - - - - -] Connection event '1' reason 'None'
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.837 188360 WARNING nova.virt.libvirt.driver [None req-33f74462-d404-4599-a8fa-4436061a642b - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 22 08:15:09 compute-0 nova_compute[188356]: 2025-11-22 08:15:09.838 188360 DEBUG nova.virt.libvirt.volume.mount [None req-33f74462-d404-4599-a8fa-4436061a642b - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
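[editor's note] The sequence above shows nova's libvirt Host object starting its native event thread, opening qemu:///system, and registering lifecycle and connection callbacks; the ComputeHostNotFound warning is the expected first-boot case, before the service record exists in the cell database. A simplified sketch of the same connect-and-register pattern using libvirt-python directly (nova's Host class wraps this with its own threading and reconnect handling):

    # Sketch: libvirt event registration as logged above, simplified.
    import libvirt

    def lifecycle_cb(conn, dom, event, detail, opaque):
        # Called for VM start/stop/suspend/etc. lifecycle transitions.
        print('lifecycle event %d/%d on %s' % (event, detail, dom.name()))

    libvirt.virEventRegisterDefaultImpl()      # "native event thread" impl
    conn = libvirt.openReadOnly('qemu:///system')
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, lifecycle_cb, None)

    while True:
        libvirt.virEventRunDefaultImpl()       # dispatch pending events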
Nov 22 08:15:09 compute-0 python3.9[189148]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:15:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:15:09.943 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:15:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:15:09.944 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:15:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:15:09.944 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:15:09 compute-0 systemd[1]: Stopping nova_compute container...
Nov 22 08:15:10 compute-0 nova_compute[188356]: 2025-11-22 08:15:10.051 188360 DEBUG oslo_concurrency.lockutils [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:15:10 compute-0 nova_compute[188356]: 2025-11-22 08:15:10.052 188360 DEBUG oslo_concurrency.lockutils [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:15:10 compute-0 nova_compute[188356]: 2025-11-22 08:15:10.052 188360 DEBUG oslo_concurrency.lockutils [None req-dfc1896a-2ef4-4f3b-a99c-ccc7eb3bab77 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:15:11 compute-0 virtqemud[189170]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 22 08:15:11 compute-0 virtqemud[189170]: hostname: compute-0
Nov 22 08:15:11 compute-0 virtqemud[189170]: End of file while reading data: Input/output error
Nov 22 08:15:11 compute-0 systemd[1]: libpod-ba7acdd6cf7f4bb0a614021d24964396e1d12923ed42e580d4b77adadaa1f30f.scope: Deactivated successfully.
Nov 22 08:15:11 compute-0 systemd[1]: libpod-ba7acdd6cf7f4bb0a614021d24964396e1d12923ed42e580d4b77adadaa1f30f.scope: Consumed 3.483s CPU time.
Nov 22 08:15:11 compute-0 podman[189204]: 2025-11-22 08:15:11.341801722 +0000 UTC m=+1.338037538 container died ba7acdd6cf7f4bb0a614021d24964396e1d12923ed42e580d4b77adadaa1f30f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:15:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba7acdd6cf7f4bb0a614021d24964396e1d12923ed42e580d4b77adadaa1f30f-userdata-shm.mount: Deactivated successfully.
Nov 22 08:15:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5daa781c1237bc67efcfd19f8916ed7a1e30b893d937b708b998b6240a35d9f8-merged.mount: Deactivated successfully.
Nov 22 08:15:11 compute-0 podman[189204]: 2025-11-22 08:15:11.406931747 +0000 UTC m=+1.403167543 container cleanup ba7acdd6cf7f4bb0a614021d24964396e1d12923ed42e580d4b77adadaa1f30f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm)
Nov 22 08:15:11 compute-0 podman[189204]: nova_compute
Nov 22 08:15:11 compute-0 podman[189241]: nova_compute
Nov 22 08:15:11 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 22 08:15:11 compute-0 systemd[1]: Stopped nova_compute container.
Nov 22 08:15:11 compute-0 systemd[1]: Starting nova_compute container...
Nov 22 08:15:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5daa781c1237bc67efcfd19f8916ed7a1e30b893d937b708b998b6240a35d9f8/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 22 08:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5daa781c1237bc67efcfd19f8916ed7a1e30b893d937b708b998b6240a35d9f8/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 08:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5daa781c1237bc67efcfd19f8916ed7a1e30b893d937b708b998b6240a35d9f8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 22 08:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5daa781c1237bc67efcfd19f8916ed7a1e30b893d937b708b998b6240a35d9f8/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 22 08:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5daa781c1237bc67efcfd19f8916ed7a1e30b893d937b708b998b6240a35d9f8/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 08:15:11 compute-0 podman[189254]: 2025-11-22 08:15:11.58492353 +0000 UTC m=+0.081957406 container init ba7acdd6cf7f4bb0a614021d24964396e1d12923ed42e580d4b77adadaa1f30f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3)
Nov 22 08:15:11 compute-0 podman[189254]: 2025-11-22 08:15:11.596106464 +0000 UTC m=+0.093140320 container start ba7acdd6cf7f4bb0a614021d24964396e1d12923ed42e580d4b77adadaa1f30f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:15:11 compute-0 nova_compute[189268]: + sudo -E kolla_set_configs
Nov 22 08:15:11 compute-0 podman[189254]: nova_compute
Nov 22 08:15:11 compute-0 systemd[1]: Started nova_compute container.
Nov 22 08:15:11 compute-0 sudo[189146]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Validating config file
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Copying service configuration files
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Deleting /etc/ceph
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Creating directory /etc/ceph
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Setting permission for /etc/ceph
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Writing out command to execute
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 08:15:11 compute-0 nova_compute[189268]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 08:15:11 compute-0 nova_compute[189268]: ++ cat /run_command
Nov 22 08:15:11 compute-0 nova_compute[189268]: + CMD=nova-compute
Nov 22 08:15:11 compute-0 nova_compute[189268]: + ARGS=
Nov 22 08:15:11 compute-0 nova_compute[189268]: + sudo kolla_copy_cacerts
Nov 22 08:15:11 compute-0 nova_compute[189268]: + [[ ! -n '' ]]
Nov 22 08:15:11 compute-0 nova_compute[189268]: + . kolla_extend_start
Nov 22 08:15:11 compute-0 nova_compute[189268]: Running command: 'nova-compute'
Nov 22 08:15:11 compute-0 nova_compute[189268]: + echo 'Running command: '\''nova-compute'\'''
Nov 22 08:15:11 compute-0 nova_compute[189268]: + umask 0022
Nov 22 08:15:11 compute-0 nova_compute[189268]: + exec nova-compute
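[editor's note] The INFO:__main__ lines above are kolla_set_configs reading /var/lib/kolla/config_files/config.json and copying each listed file into place; kolla_start then execs the command read from /run_command. A hedged reconstruction of the shape of that config.json, shown as a Python dict so it can carry comments; the source/dest paths are taken from the logged copies, but the owner/perm values and the file itself are assumptions based on kolla's documented schema, not read from this host:

    # Illustrative shape of /var/lib/kolla/config_files/config.json.
    # Entries mirror two of the copy operations logged above.
    KOLLA_CONFIG = {
        "command": "nova-compute",
        "config_files": [
            {"source": "/var/lib/kolla/config_files/01-nova.conf",
             "dest": "/etc/nova/nova.conf.d/01-nova.conf",
             "owner": "nova", "perm": "0600"},      # assumed owner/perm
            {"source": "/var/lib/kolla/config_files/ssh-privatekey",
             "dest": "/var/lib/nova/.ssh/ssh-privatekey",
             "owner": "nova", "perm": "0600"},      # assumed owner/perm
        ],
    }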
Nov 22 08:15:12 compute-0 sudo[189432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tysvlfjchoffzlsglepxzgzoziacwpsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799311.8209019-1546-121577665045406/AnsiballZ_podman_container.py'
Nov 22 08:15:12 compute-0 sudo[189432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:12 compute-0 python3.9[189434]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 22 08:15:12 compute-0 systemd[1]: Started libpod-conmon-bfc793705572866b8d6046d02b63f98674a2d5b137f35f2f96f89fc139370043.scope.
Nov 22 08:15:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:15:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83ba54d65d9e5b288cb1284dc352a5f96cf7579b9a11b75ebe6b1fabfcc3fa8/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 22 08:15:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83ba54d65d9e5b288cb1284dc352a5f96cf7579b9a11b75ebe6b1fabfcc3fa8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 22 08:15:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83ba54d65d9e5b288cb1284dc352a5f96cf7579b9a11b75ebe6b1fabfcc3fa8/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 22 08:15:12 compute-0 podman[189460]: 2025-11-22 08:15:12.620086329 +0000 UTC m=+0.165838883 container init bfc793705572866b8d6046d02b63f98674a2d5b137f35f2f96f89fc139370043 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 08:15:12 compute-0 podman[189460]: 2025-11-22 08:15:12.629687721 +0000 UTC m=+0.175440245 container start bfc793705572866b8d6046d02b63f98674a2d5b137f35f2f96f89fc139370043 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 22 08:15:12 compute-0 python3.9[189434]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 22 08:15:12 compute-0 nova_compute_init[189482]: INFO:nova_statedir:Applying nova statedir ownership
Nov 22 08:15:12 compute-0 nova_compute_init[189482]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 22 08:15:12 compute-0 nova_compute_init[189482]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 22 08:15:12 compute-0 nova_compute_init[189482]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 22 08:15:12 compute-0 nova_compute_init[189482]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 22 08:15:12 compute-0 nova_compute_init[189482]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 22 08:15:12 compute-0 nova_compute_init[189482]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 22 08:15:12 compute-0 nova_compute_init[189482]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 22 08:15:12 compute-0 nova_compute_init[189482]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 22 08:15:12 compute-0 nova_compute_init[189482]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 22 08:15:12 compute-0 nova_compute_init[189482]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 22 08:15:12 compute-0 nova_compute_init[189482]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 22 08:15:12 compute-0 nova_compute_init[189482]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 22 08:15:12 compute-0 nova_compute_init[189482]: INFO:nova_statedir:Nova statedir ownership complete
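[editor's note] The nova_compute_init pass above walks /var/lib/nova, chowning anything not already owned by the nova uid/gid (42436:42436) and resetting the SELinux context, while skipping the path named in NOVA_STATEDIR_OWNERSHIP_SKIP. A simplified sketch of that ownership pass; the real nova_statedir_ownership.py also handles the SELinux relabel via the _nova_secontext mount, which is omitted here:

    # Simplified sketch of the statedir ownership pass logged above
    # (chown only; SELinux handling omitted).
    import os

    TARGET_UID, TARGET_GID = 42436, 42436
    SKIP = os.environ.get('NOVA_STATEDIR_OWNERSHIP_SKIP',
                          '/var/lib/nova/compute_id')

    for root, dirs, files in os.walk('/var/lib/nova'):
        for path in [root] + [os.path.join(root, f) for f in files]:
            if path == SKIP:
                continue
            st = os.lstat(path)
            if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                os.lchown(path, TARGET_UID, TARGET_GID)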
Nov 22 08:15:12 compute-0 systemd[1]: libpod-bfc793705572866b8d6046d02b63f98674a2d5b137f35f2f96f89fc139370043.scope: Deactivated successfully.
Nov 22 08:15:12 compute-0 podman[189496]: 2025-11-22 08:15:12.757838524 +0000 UTC m=+0.035879389 container died bfc793705572866b8d6046d02b63f98674a2d5b137f35f2f96f89fc139370043 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:15:12 compute-0 sudo[189432]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bfc793705572866b8d6046d02b63f98674a2d5b137f35f2f96f89fc139370043-userdata-shm.mount: Deactivated successfully.
Nov 22 08:15:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-e83ba54d65d9e5b288cb1284dc352a5f96cf7579b9a11b75ebe6b1fabfcc3fa8-merged.mount: Deactivated successfully.
Nov 22 08:15:12 compute-0 podman[189496]: 2025-11-22 08:15:12.882773829 +0000 UTC m=+0.160814674 container cleanup bfc793705572866b8d6046d02b63f98674a2d5b137f35f2f96f89fc139370043 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm)
Nov 22 08:15:12 compute-0 systemd[1]: libpod-conmon-bfc793705572866b8d6046d02b63f98674a2d5b137f35f2f96f89fc139370043.scope: Deactivated successfully.
Nov 22 08:15:13 compute-0 sshd-session[161247]: Connection closed by 192.168.122.30 port 54144
Nov 22 08:15:13 compute-0 sshd-session[161244]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:15:13 compute-0 systemd-logind[826]: Session 24 logged out. Waiting for processes to exit.
Nov 22 08:15:13 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Nov 22 08:15:13 compute-0 systemd[1]: session-24.scope: Consumed 1min 53.514s CPU time.
Nov 22 08:15:13 compute-0 systemd-logind[826]: Removed session 24.
Nov 22 08:15:13 compute-0 nova_compute[189268]: 2025-11-22 08:15:13.909 189273 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 08:15:13 compute-0 nova_compute[189268]: 2025-11-22 08:15:13.910 189273 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 08:15:13 compute-0 nova_compute[189268]: 2025-11-22 08:15:13.910 189273 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 08:15:13 compute-0 nova_compute[189268]: 2025-11-22 08:15:13.910 189273 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
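[editor's note] The three plugin-load lines above come from os_vif's initialize(), which discovers VIF plugins through setuptools entry points via stevedore. A sketch of enumerating them the same way; the 'os_vif' entry-point namespace is an assumption based on the package name:

    # Sketch: list installed VIF plugins via stevedore.
    # Namespace 'os_vif' is assumed, not confirmed from this log.
    from stevedore import extension

    mgr = extension.ExtensionManager(namespace='os_vif',
                                     invoke_on_load=False)
    print(sorted(mgr.names()))   # expect: ['linux_bridge', 'noop', 'ovs']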
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.061 189273 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.088 189273 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.089 189273 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
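[editor's note] The grep above is a capability probe: the string node.session.scan is searched for inside the iscsiadm binary to decide whether manual-scan mode is supported, and exit status 1 simply means the string was not found (here /sbin/iscsiadm is the run-on-host wrapper installed at 08:15:11, so the miss is unsurprising). The "failed. Not Retrying." line is processutils reporting the nonzero exit, not an error. A sketch of the same probe using plain subprocess instead of oslo_concurrency.processutils:

    # Sketch of the capability probe logged above: returncode 0 means
    # the iscsiadm build mentions node.session.scan, 1 means it does not.
    import subprocess

    res = subprocess.run(
        ['grep', '-F', 'node.session.scan', '/sbin/iscsiadm'],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    manual_scan_supported = (res.returncode == 0)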
Nov 22 08:15:14 compute-0 podman[189548]: 2025-11-22 08:15:14.101615396 +0000 UTC m=+0.057372995 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 22 08:15:14 compute-0 podman[189549]: 2025-11-22 08:15:14.1036106 +0000 UTC m=+0.059462741 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.571 189273 INFO nova.virt.driver [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.670 189273 INFO nova.compute.provider_config [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.682 189273 DEBUG oslo_concurrency.lockutils [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.682 189273 DEBUG oslo_concurrency.lockutils [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.682 189273 DEBUG oslo_concurrency.lockutils [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.683 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.683 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.683 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.683 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.683 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.683 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.684 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.684 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.684 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.684 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.684 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.684 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.685 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.685 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.685 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.685 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.685 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.685 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.685 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.686 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.686 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.686 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.686 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.686 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.686 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.687 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.687 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.687 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.687 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.687 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.688 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.688 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.688 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.688 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.688 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.688 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.689 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.689 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.689 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.689 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.689 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.690 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.690 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.690 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.690 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.690 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.691 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.691 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.691 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.691 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.691 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.692 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.692 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.692 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.692 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.692 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.693 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.693 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.693 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.693 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.693 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.693 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.694 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.694 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.694 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.694 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.694 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.694 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.695 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.695 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.695 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.695 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.695 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.695 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.696 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.696 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.696 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.696 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.696 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.696 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.697 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.697 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.697 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.697 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.697 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.697 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.698 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.698 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.698 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.698 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.698 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.698 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.698 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.699 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.699 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.699 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.699 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.699 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.699 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.699 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.700 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.700 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.700 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.700 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.700 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.700 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.700 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.701 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.701 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.701 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.701 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.701 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.701 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.702 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.702 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.702 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.702 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.702 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.702 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.702 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.702 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.703 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.703 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.703 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.703 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.703 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.703 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.703 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.704 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.704 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.704 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.704 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.704 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.704 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.704 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.705 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.705 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.705 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.705 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.705 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.705 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.705 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.706 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.706 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.706 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.706 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.706 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.706 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.707 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.707 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.707 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.707 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.707 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.707 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.708 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.708 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.708 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.708 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.708 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.708 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.709 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.709 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.709 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.709 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.709 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.709 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.709 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.709 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.710 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.710 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.710 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.710 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.710 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.710 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.710 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.711 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.711 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.711 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.711 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.711 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.711 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.711 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.712 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.712 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.712 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.712 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.712 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.712 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.712 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.713 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.713 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.713 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.713 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.713 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.713 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.714 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.714 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.714 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.714 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.715 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.715 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.715 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.715 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.715 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.715 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.716 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.716 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.716 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.716 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.716 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.716 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.716 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.717 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.717 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.717 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.717 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.717 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.717 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.718 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.718 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.718 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.718 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.718 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.718 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.719 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.719 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.719 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.719 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.719 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.719 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.720 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.720 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.720 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.720 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.720 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.721 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.721 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.721 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.721 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.721 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.721 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.721 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.722 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.722 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.722 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.722 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.722 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.722 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.722 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.723 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.723 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.723 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.723 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.723 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.723 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.723 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.724 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.724 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.724 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.724 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.724 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.724 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.724 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.725 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.725 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.725 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.725 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.725 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.725 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.725 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.726 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.726 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.726 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.726 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.726 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.726 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.727 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.727 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.727 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.727 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.727 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.727 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.728 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.728 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.728 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.728 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.728 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.728 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.728 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.729 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.729 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.729 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.729 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.729 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.729 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.730 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.730 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.730 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.730 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.730 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.730 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.730 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.731 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.731 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.731 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.731 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.731 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.731 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.731 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.732 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.732 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.732 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.732 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.732 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.732 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.733 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.733 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.733 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.733 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.733 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.733 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.733 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.734 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.734 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.734 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.734 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.734 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.735 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.735 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.735 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.735 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.735 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.736 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.736 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.736 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.736 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.736 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.736 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.737 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.737 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.737 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.737 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.737 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.738 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.738 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.738 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.738 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.738 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.739 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.739 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.739 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.739 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.739 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.740 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.740 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.740 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.740 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.741 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.741 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.741 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.741 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.741 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.741 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.741 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.742 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.742 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.742 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.742 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.742 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.742 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.743 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.743 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.743 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.743 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.743 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.743 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.743 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.744 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.744 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.744 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.744 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.744 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.744 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.744 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.745 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.745 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.745 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.745 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.745 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.745 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.746 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.746 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.746 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.746 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.746 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.747 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.747 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.747 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.747 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.747 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.748 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.748 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.748 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.748 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.748 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.749 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.749 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.749 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.749 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.749 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.749 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.749 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.750 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.750 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.750 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.750 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.750 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.750 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.750 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.751 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.751 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.751 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.751 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.751 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.751 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.751 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.751 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.752 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.752 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.752 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.752 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.752 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.752 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.752 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.753 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.753 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.753 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.753 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.753 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.753 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.753 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.754 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.754 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.754 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.754 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.754 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.754 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.754 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.755 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.755 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.755 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.755 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.755 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.755 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.755 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.756 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.756 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.756 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.756 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.756 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.756 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.756 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.757 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.757 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.757 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.757 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.757 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.757 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.757 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.758 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.758 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.758 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.758 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.758 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.758 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.758 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.759 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.759 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.759 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.759 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.759 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.759 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.759 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.760 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.760 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.760 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.760 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.760 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.760 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.760 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.761 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.761 189273 WARNING oslo_config.cfg [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 22 08:15:14 compute-0 nova_compute[189268]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 22 08:15:14 compute-0 nova_compute[189268]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 22 08:15:14 compute-0 nova_compute[189268]: and ``live_migration_inbound_addr`` respectively.
Nov 22 08:15:14 compute-0 nova_compute[189268]: ).  Its value may be silently ignored in the future.
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.761 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.761 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.761 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.762 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.762 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.762 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.762 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.762 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.762 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.762 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.763 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.763 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.763 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.763 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.763 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.763 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.764 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.764 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.764 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.764 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.764 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.764 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.764 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.764 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.765 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.765 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.765 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.765 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.765 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.765 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.766 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.766 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.766 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.766 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.766 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.766 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.766 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.767 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.767 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.767 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.767 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.767 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.767 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.767 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.768 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.768 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.768 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.768 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.768 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.768 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.768 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.769 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.769 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.769 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.769 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.769 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.769 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.769 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.770 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.770 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.770 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.770 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.770 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.770 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.771 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.771 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.771 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.771 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.771 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.771 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.771 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.772 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.772 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.772 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.772 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.772 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.772 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.772 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.773 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.773 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.773 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.773 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.773 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.773 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.773 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.774 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.774 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.774 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.774 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.774 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.774 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.775 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.775 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.775 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.775 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.775 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.776 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.776 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.776 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.776 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.776 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.776 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.777 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.777 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.777 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.777 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.777 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.777 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.777 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.778 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.778 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.778 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.778 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.778 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.778 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.778 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.779 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.779 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.779 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.779 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.779 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.779 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.779 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.780 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.780 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.780 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.780 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.780 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.781 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.781 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.781 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.781 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.781 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.781 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.782 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.782 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.782 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.782 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.782 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.783 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.783 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.783 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.783 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.784 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.784 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.784 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.784 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.784 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.785 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.785 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.785 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.785 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.785 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.785 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.786 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.786 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.786 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.786 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.786 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.786 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.787 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.787 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.787 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.787 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.787 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.787 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.788 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.788 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.788 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.788 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.788 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.788 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.788 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.789 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.789 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.789 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.789 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.789 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.789 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.790 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.790 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.790 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.790 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.790 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.790 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.791 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.791 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.791 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.791 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.791 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.791 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.791 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.792 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.792 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.792 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.792 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.792 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.793 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.793 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.793 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.793 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.793 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.794 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.794 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.794 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.794 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.794 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.795 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.795 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.795 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.795 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.795 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.795 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.796 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.796 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.796 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.796 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.796 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.797 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.797 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.797 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.797 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.797 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.797 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.797 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.798 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.798 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.798 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.798 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.798 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.798 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.799 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.799 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.799 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.799 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.799 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.799 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.799 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.800 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.800 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.800 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.800 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.800 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.800 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.801 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.801 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.801 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.801 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.801 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.802 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.802 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.802 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.802 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.802 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.802 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.803 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.803 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.803 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.803 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.803 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.803 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.803 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.804 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.804 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.804 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.804 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.804 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.804 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.804 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.805 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.805 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.805 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.805 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.805 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.805 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.805 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.806 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.806 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.806 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.806 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.806 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.806 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.806 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.807 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.807 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.807 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.807 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.807 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.807 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.807 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.808 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.808 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.808 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.808 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.808 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.808 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.808 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.809 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.809 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.809 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.809 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.809 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.809 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.809 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.810 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.810 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.810 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.810 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.810 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.810 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.810 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.811 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.811 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.811 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.811 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.811 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.811 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.811 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.812 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.812 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.812 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.812 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.812 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.812 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.812 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.813 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.813 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.813 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.813 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.813 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.813 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.814 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.814 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.814 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.814 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.814 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.814 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.815 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.815 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.815 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.815 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.815 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.815 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.816 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.816 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.816 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.816 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.816 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.816 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.817 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.817 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.817 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.817 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.817 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.817 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.818 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.818 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.818 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.818 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.818 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.818 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.818 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.819 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.819 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.819 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.819 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.819 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.819 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.819 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.820 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.820 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.820 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.820 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.820 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.820 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.821 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.821 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.821 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.821 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.821 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.821 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.822 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.822 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.822 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.822 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.822 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.823 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.823 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.823 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.823 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.823 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.824 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.824 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.824 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.824 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.824 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.825 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.825 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.825 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.825 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.826 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.826 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.826 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.826 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.826 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.827 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.827 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.827 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.827 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.827 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.828 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.828 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.828 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.828 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.828 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.829 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.829 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.829 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.829 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.829 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.830 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.830 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.830 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.830 189273 DEBUG oslo_service.service [None req-2fe5a934-85f1-4eef-8095-987034a75901 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.831 189273 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.843 189273 DEBUG nova.virt.libvirt.host [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.843 189273 DEBUG nova.virt.libvirt.host [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.844 189273 DEBUG nova.virt.libvirt.host [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.844 189273 DEBUG nova.virt.libvirt.host [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.858 189273 DEBUG nova.virt.libvirt.host [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f2819bb43a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.861 189273 DEBUG nova.virt.libvirt.host [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f2819bb43a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.862 189273 INFO nova.virt.libvirt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Connection event '1' reason 'None'
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.870 189273 INFO nova.virt.libvirt.host [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Libvirt host capabilities <capabilities>
Nov 22 08:15:14 compute-0 nova_compute[189268]: 
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <host>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <uuid>11d569d2-d99e-416a-983e-bf082353d9ca</uuid>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <cpu>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <arch>x86_64</arch>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model>EPYC-Rome-v4</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <vendor>AMD</vendor>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <microcode version='16777317'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <signature family='23' model='49' stepping='0'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='x2apic'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='tsc-deadline'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='osxsave'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='hypervisor'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='tsc_adjust'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='spec-ctrl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='stibp'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='arch-capabilities'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='ssbd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='cmp_legacy'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='topoext'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='virt-ssbd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='lbrv'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='tsc-scale'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='vmcb-clean'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='pause-filter'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='pfthreshold'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='svme-addr-chk'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='rdctl-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='skip-l1dfl-vmentry'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='mds-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature name='pschange-mc-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <pages unit='KiB' size='4'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <pages unit='KiB' size='2048'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <pages unit='KiB' size='1048576'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </cpu>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <power_management>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <suspend_mem/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <suspend_disk/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <suspend_hybrid/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </power_management>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <iommu support='no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <migration_features>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <live/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <uri_transports>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <uri_transport>tcp</uri_transport>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <uri_transport>rdma</uri_transport>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </uri_transports>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </migration_features>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <topology>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <cells num='1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <cell id='0'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:           <memory unit='KiB'>7864308</memory>
Nov 22 08:15:14 compute-0 nova_compute[189268]:           <pages unit='KiB' size='4'>1966077</pages>
Nov 22 08:15:14 compute-0 nova_compute[189268]:           <pages unit='KiB' size='2048'>0</pages>
Nov 22 08:15:14 compute-0 nova_compute[189268]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 22 08:15:14 compute-0 nova_compute[189268]:           <distances>
Nov 22 08:15:14 compute-0 nova_compute[189268]:             <sibling id='0' value='10'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:           </distances>
Nov 22 08:15:14 compute-0 nova_compute[189268]:           <cpus num='8'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:           </cpus>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         </cell>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </cells>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </topology>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <cache>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </cache>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <secmodel>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model>selinux</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <doi>0</doi>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </secmodel>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <secmodel>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model>dac</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <doi>0</doi>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </secmodel>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   </host>
Nov 22 08:15:14 compute-0 nova_compute[189268]: 
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <guest>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <os_type>hvm</os_type>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <arch name='i686'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <wordsize>32</wordsize>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <domain type='qemu'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <domain type='kvm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </arch>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <features>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <pae/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <nonpae/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <acpi default='on' toggle='yes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <apic default='on' toggle='no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <cpuselection/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <deviceboot/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <disksnapshot default='on' toggle='no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <externalSnapshot/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </features>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   </guest>
Nov 22 08:15:14 compute-0 nova_compute[189268]: 
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <guest>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <os_type>hvm</os_type>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <arch name='x86_64'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <wordsize>64</wordsize>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <domain type='qemu'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <domain type='kvm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </arch>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <features>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <acpi default='on' toggle='yes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <apic default='on' toggle='no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <cpuselection/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <deviceboot/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <disksnapshot default='on' toggle='no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <externalSnapshot/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </features>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   </guest>
Nov 22 08:15:14 compute-0 nova_compute[189268]: 
Nov 22 08:15:14 compute-0 nova_compute[189268]: </capabilities>
Nov 22 08:15:14 compute-0 nova_compute[189268]: 
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.881 189273 WARNING nova.virt.libvirt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.881 189273 DEBUG nova.virt.libvirt.volume.mount [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.887 189273 DEBUG nova.virt.libvirt.host [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.910 189273 DEBUG nova.virt.libvirt.host [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 22 08:15:14 compute-0 nova_compute[189268]: <domainCapabilities>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <domain>kvm</domain>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <arch>i686</arch>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <vcpu max='240'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <iothreads supported='yes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <os supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <enum name='firmware'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <loader supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='type'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>rom</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>pflash</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='readonly'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>yes</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>no</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='secure'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>no</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </loader>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   </os>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <cpu>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <mode name='host-passthrough' supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='hostPassthroughMigratable'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>on</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>off</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </mode>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <mode name='maximum' supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='maximumMigratable'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>on</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>off</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </mode>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <mode name='host-model' supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <vendor>AMD</vendor>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='x2apic'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='hypervisor'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='stibp'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='ssbd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='overflow-recov'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='succor'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='ibrs'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='lbrv'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='tsc-scale'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='flushbyasid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='pause-filter'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='pfthreshold'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='disable' name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </mode>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <mode name='custom' supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Broadwell'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Broadwell-IBRS'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Broadwell-noTSX'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Broadwell-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Broadwell-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Broadwell-v3'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Broadwell-v4'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Cooperlake'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Cooperlake-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Cooperlake-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Denverton'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='mpx'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Denverton-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='mpx'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Denverton-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Denverton-v3'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Dhyana-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-Genoa'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amd-psfd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='auto-ibrs'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='no-nested-data-bp'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='null-sel-clr-base'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='stibp-always-on'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amd-psfd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='auto-ibrs'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='no-nested-data-bp'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='null-sel-clr-base'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='stibp-always-on'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-Milan'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-Milan-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-Milan-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amd-psfd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='no-nested-data-bp'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='null-sel-clr-base'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='stibp-always-on'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-Rome'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-Rome-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-Rome-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-Rome-v3'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-v3'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-v4'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='GraniteRapids'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-fp16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='mcdt-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pbrsb-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='prefetchiti'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='GraniteRapids-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-fp16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='mcdt-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pbrsb-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='prefetchiti'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='GraniteRapids-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-fp16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx10'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx10-128'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx10-256'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx10-512'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='mcdt-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pbrsb-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='prefetchiti'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Haswell'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Haswell-IBRS'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Haswell-noTSX'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Haswell-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Haswell-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Haswell-v3'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Haswell-v4'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v3'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v4'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v5'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v6'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v7'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='IvyBridge'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='IvyBridge-IBRS'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='IvyBridge-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='IvyBridge-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='KnightsMill'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-4fmaps'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-4vnniw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512er'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512pf'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='KnightsMill-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-4fmaps'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-4vnniw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512er'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512pf'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Opteron_G4'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fma4'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xop'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Opteron_G4-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fma4'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xop'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Opteron_G5'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fma4'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='tbm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xop'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Opteron_G5-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fma4'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='tbm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xop'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='SapphireRapids'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='SapphireRapids-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='SapphireRapids-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='SapphireRapids-v3'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='SierraForest'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx-ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx-ne-convert'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx-vnni-int8'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='cmpccxadd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='mcdt-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pbrsb-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='SierraForest-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx-ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx-ne-convert'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx-vnni-int8'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='cmpccxadd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='mcdt-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pbrsb-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-v3'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-v4'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-v3'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-v4'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-v5'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Snowridge'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='core-capability'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='mpx'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='split-lock-detect'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Snowridge-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='core-capability'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='mpx'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='split-lock-detect'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Snowridge-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='core-capability'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='split-lock-detect'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Snowridge-v3'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='core-capability'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='split-lock-detect'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Snowridge-v4'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='athlon'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='3dnow'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='3dnowext'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='athlon-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='3dnow'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='3dnowext'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='core2duo'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='core2duo-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='coreduo'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='coreduo-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='n270'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='n270-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='phenom'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='3dnow'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='3dnowext'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='phenom-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='3dnow'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='3dnowext'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </mode>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <memoryBacking supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <enum name='sourceType'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <value>file</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <value>anonymous</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <value>memfd</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   </memoryBacking>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <disk supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='diskDevice'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>disk</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>cdrom</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>floppy</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>lun</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='bus'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>ide</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>fdc</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>scsi</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>virtio</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>usb</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>sata</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='model'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>virtio</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>virtio-transitional</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>virtio-non-transitional</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <graphics supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='type'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>vnc</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>egl-headless</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>dbus</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </graphics>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <video supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='modelType'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>vga</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>cirrus</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>virtio</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>none</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>bochs</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>ramfb</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </video>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <hostdev supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='mode'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>subsystem</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='startupPolicy'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>default</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>mandatory</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>requisite</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>optional</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='subsysType'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>usb</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>pci</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>scsi</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='capsType'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='pciBackend'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </hostdev>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <rng supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='model'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>virtio</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>virtio-transitional</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>virtio-non-transitional</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='backendModel'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>random</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>egd</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>builtin</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <filesystem supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='driverType'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>path</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>handle</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>virtiofs</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </filesystem>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <tpm supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='model'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>tpm-tis</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>tpm-crb</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='backendModel'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>emulator</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>external</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='backendVersion'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>2.0</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </tpm>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <redirdev supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='bus'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>usb</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </redirdev>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <channel supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='type'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>pty</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>unix</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </channel>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <crypto supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='model'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='type'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>qemu</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='backendModel'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>builtin</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </crypto>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <interface supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='backendType'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>default</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>passt</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </interface>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <panic supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='model'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>isa</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>hyperv</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </panic>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <console supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='type'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>null</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>vc</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>pty</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>dev</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>file</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>pipe</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>stdio</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>udp</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>tcp</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>unix</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>qemu-vdagent</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>dbus</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </console>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <features>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <gic supported='no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <vmcoreinfo supported='yes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <genid supported='yes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <backingStoreInput supported='yes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <backup supported='yes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <async-teardown supported='yes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <ps2 supported='yes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <sev supported='no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <sgx supported='no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <hyperv supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='features'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>relaxed</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>vapic</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>spinlocks</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>vpindex</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>runtime</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>synic</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>stimer</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>reset</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>vendor_id</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>frequencies</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>reenlightenment</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>tlbflush</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>ipi</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>avic</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>emsr_bitmap</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>xmm_input</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <defaults>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <spinlocks>4095</spinlocks>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <stimer_direct>on</stimer_direct>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </defaults>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </hyperv>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <launchSecurity supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='sectype'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>tdx</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </launchSecurity>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   </features>
Nov 22 08:15:14 compute-0 nova_compute[189268]: </domainCapabilities>
Nov 22 08:15:14 compute-0 nova_compute[189268]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
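The dump above (and the one that follows) is a libvirt domainCapabilities document: nova's libvirt driver issues one query per (emulator, arch, machine type, virt type) combination and logs the raw XML at debug level from _get_domain_capabilities, which is why the same structure repeats below for arch=i686. A minimal sketch of reproducing the same query outside nova, assuming a local qemu:///system connection and the libvirt-python bindings — the path/arch/machine/virttype values mirror the surrounding log lines, while the parsing at the end is purely illustrative and not nova's actual code path:

    # Sketch: fetch the same domainCapabilities XML that nova logs,
    # via the libvirt-python bindings against a local qemu:///system URI.
    # Values below mirror the log; everything else is illustrative.
    import xml.etree.ElementTree as ET

    import libvirt  # from the libvirt-python package

    conn = libvirt.open("qemu:///system")
    caps_xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",  # emulator binary, as reported in <path>
        "i686",                   # arch, as in the debug message above
        "q35",                    # machine type alias
        "kvm",                    # virt type, as in <domain>kvm</domain>
        0,                        # flags (unused)
    )
    conn.close()

    # List the custom CPU models this host can actually run, i.e. the
    # <model usable='yes'> entries under <mode name='custom'>.
    root = ET.fromstring(caps_xml)
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        if model.get("usable") == "yes":
            print(model.text)

The equivalent one-off check from a shell is virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --virttype kvm --arch i686 --machine q35. In the dumps, each <blockers model='...'> element names the CPU features that prevent the corresponding model from being usable on this host, which is why models marked usable='no' are immediately followed by such a list.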
Nov 22 08:15:14 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.917 189273 DEBUG nova.virt.libvirt.host [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 22 08:15:14 compute-0 nova_compute[189268]: <domainCapabilities>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <domain>kvm</domain>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <arch>i686</arch>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <vcpu max='4096'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <iothreads supported='yes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <os supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <enum name='firmware'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <loader supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='type'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>rom</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>pflash</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='readonly'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>yes</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>no</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='secure'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>no</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </loader>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   </os>
Nov 22 08:15:14 compute-0 nova_compute[189268]:   <cpu>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <mode name='host-passthrough' supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='hostPassthroughMigratable'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>on</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>off</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </mode>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <mode name='maximum' supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <enum name='maximumMigratable'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>on</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <value>off</value>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </mode>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <mode name='host-model' supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <vendor>AMD</vendor>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='x2apic'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='hypervisor'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='stibp'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='ssbd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='overflow-recov'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='succor'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='ibrs'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='lbrv'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='tsc-scale'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='flushbyasid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='pause-filter'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='pfthreshold'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <feature policy='disable' name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     </mode>
Nov 22 08:15:14 compute-0 nova_compute[189268]:     <mode name='custom' supported='yes'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Broadwell'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Broadwell-IBRS'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Broadwell-noTSX'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Broadwell-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Broadwell-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Broadwell-v3'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Broadwell-v4'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Cooperlake'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Cooperlake-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Cooperlake-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Denverton'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='mpx'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Denverton-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='mpx'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Denverton-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Denverton-v3'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Dhyana-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-Genoa'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amd-psfd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='auto-ibrs'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='no-nested-data-bp'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='null-sel-clr-base'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='stibp-always-on'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amd-psfd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='auto-ibrs'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='no-nested-data-bp'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='null-sel-clr-base'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='stibp-always-on'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-Milan'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-Milan-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-Milan-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amd-psfd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='no-nested-data-bp'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='null-sel-clr-base'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='stibp-always-on'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-Rome'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-Rome-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-Rome-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-Rome-v3'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-v3'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='EPYC-v4'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='GraniteRapids'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-fp16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='mcdt-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pbrsb-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='prefetchiti'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='GraniteRapids-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-fp16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='mcdt-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pbrsb-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='prefetchiti'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='GraniteRapids-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-fp16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx10'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx10-128'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx10-256'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx10-512'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='mcdt-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pbrsb-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='prefetchiti'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Haswell'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Haswell-IBRS'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Haswell-noTSX'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Haswell-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Haswell-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Haswell-v3'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Haswell-v4'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v3'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v4'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v5'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v6'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v7'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='IvyBridge'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='IvyBridge-IBRS'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='IvyBridge-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='IvyBridge-v2'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='KnightsMill'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-4fmaps'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-4vnniw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512er'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512pf'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 08:15:14 compute-0 nova_compute[189268]:       <blockers model='KnightsMill-v1'>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-4fmaps'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-4vnniw'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512er'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='avx512pf'/>
Nov 22 08:15:14 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Opteron_G4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fma4'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xop'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Opteron_G4-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fma4'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xop'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Opteron_G5'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fma4'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tbm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xop'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Opteron_G5-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fma4'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tbm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xop'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='SapphireRapids'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='SapphireRapids-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='SapphireRapids-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='SapphireRapids-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='SierraForest'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-ne-convert'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cmpccxadd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mcdt-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pbrsb-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='SierraForest-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-ne-convert'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cmpccxadd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mcdt-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pbrsb-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-v4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-v4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-v5'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Snowridge'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='core-capability'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mpx'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='split-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Snowridge-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='core-capability'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mpx'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='split-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Snowridge-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='core-capability'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='split-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Snowridge-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='core-capability'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='split-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Snowridge-v4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='athlon'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnow'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnowext'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='athlon-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnow'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnowext'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='core2duo'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='core2duo-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='coreduo'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='coreduo-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='n270'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='n270-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='phenom'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnow'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnowext'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='phenom-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnow'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnowext'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </mode>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <memoryBacking supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <enum name='sourceType'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <value>file</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <value>anonymous</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <value>memfd</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   </memoryBacking>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <disk supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='diskDevice'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>disk</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>cdrom</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>floppy</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>lun</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='bus'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>fdc</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>scsi</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>usb</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>sata</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='model'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio-transitional</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio-non-transitional</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <graphics supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='type'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>vnc</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>egl-headless</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>dbus</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </graphics>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <video supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='modelType'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>vga</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>cirrus</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>none</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>bochs</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>ramfb</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </video>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <hostdev supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='mode'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>subsystem</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='startupPolicy'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>default</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>mandatory</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>requisite</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>optional</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='subsysType'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>usb</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>pci</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>scsi</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='capsType'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='pciBackend'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </hostdev>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <rng supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='model'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio-transitional</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio-non-transitional</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='backendModel'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>random</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>egd</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>builtin</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <filesystem supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='driverType'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>path</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>handle</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtiofs</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </filesystem>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <tpm supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='model'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>tpm-tis</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>tpm-crb</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='backendModel'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>emulator</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>external</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='backendVersion'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>2.0</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </tpm>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <redirdev supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='bus'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>usb</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </redirdev>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <channel supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='type'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>pty</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>unix</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </channel>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <crypto supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='model'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='type'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>qemu</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='backendModel'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>builtin</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </crypto>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <interface supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='backendType'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>default</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>passt</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </interface>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <panic supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='model'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>isa</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>hyperv</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </panic>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <console supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='type'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>null</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>vc</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>pty</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>dev</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>file</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>pipe</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>stdio</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>udp</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>tcp</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>unix</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>qemu-vdagent</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>dbus</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </console>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <features>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <gic supported='no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <vmcoreinfo supported='yes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <genid supported='yes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <backingStoreInput supported='yes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <backup supported='yes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <async-teardown supported='yes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <ps2 supported='yes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <sev supported='no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <sgx supported='no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <hyperv supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='features'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>relaxed</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>vapic</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>spinlocks</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>vpindex</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>runtime</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>synic</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>stimer</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>reset</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>vendor_id</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>frequencies</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>reenlightenment</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>tlbflush</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>ipi</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>avic</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>emsr_bitmap</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>xmm_input</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <defaults>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <spinlocks>4095</spinlocks>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <stimer_direct>on</stimer_direct>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </defaults>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </hyperv>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <launchSecurity supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='sectype'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>tdx</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </launchSecurity>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   </features>
Nov 22 08:15:15 compute-0 nova_compute[189268]: </domainCapabilities>
Nov 22 08:15:15 compute-0 nova_compute[189268]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.950 189273 DEBUG nova.virt.libvirt.host [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:14.956 189273 DEBUG nova.virt.libvirt.host [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 22 08:15:15 compute-0 nova_compute[189268]: <domainCapabilities>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <domain>kvm</domain>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <arch>x86_64</arch>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <vcpu max='240'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <iothreads supported='yes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <os supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <enum name='firmware'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <loader supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='type'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>rom</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>pflash</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='readonly'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>yes</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>no</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='secure'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>no</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </loader>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   </os>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <cpu>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <mode name='host-passthrough' supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='hostPassthroughMigratable'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>on</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>off</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </mode>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <mode name='maximum' supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='maximumMigratable'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>on</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>off</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </mode>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <mode name='host-model' supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <vendor>AMD</vendor>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='x2apic'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='hypervisor'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='stibp'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='ssbd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='overflow-recov'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='succor'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='ibrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='lbrv'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='tsc-scale'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='flushbyasid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='pause-filter'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='pfthreshold'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='disable' name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </mode>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <mode name='custom' supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Broadwell'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Broadwell-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Broadwell-noTSX'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Broadwell-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Broadwell-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Broadwell-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Broadwell-v4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Cooperlake'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Cooperlake-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Cooperlake-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Denverton'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mpx'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Denverton-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mpx'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Denverton-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Denverton-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Dhyana-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-Genoa'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amd-psfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='auto-ibrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='no-nested-data-bp'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='null-sel-clr-base'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='stibp-always-on'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amd-psfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='auto-ibrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='no-nested-data-bp'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='null-sel-clr-base'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='stibp-always-on'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-Milan'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-Milan-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-Milan-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amd-psfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='no-nested-data-bp'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='null-sel-clr-base'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='stibp-always-on'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-Rome'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-Rome-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-Rome-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-Rome-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-v4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='GraniteRapids'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mcdt-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pbrsb-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='prefetchiti'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='GraniteRapids-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mcdt-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pbrsb-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='prefetchiti'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='GraniteRapids-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx10'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx10-128'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx10-256'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx10-512'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mcdt-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pbrsb-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='prefetchiti'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Haswell'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Haswell-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Haswell-noTSX'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Haswell-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Haswell-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Haswell-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Haswell-v4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v5'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v6'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v7'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='IvyBridge'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='IvyBridge-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='IvyBridge-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='IvyBridge-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='KnightsMill'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-4fmaps'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-4vnniw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512er'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512pf'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='KnightsMill-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-4fmaps'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-4vnniw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512er'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512pf'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Opteron_G4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fma4'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xop'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Opteron_G4-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fma4'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xop'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Opteron_G5'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fma4'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tbm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xop'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Opteron_G5-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fma4'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tbm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xop'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='SapphireRapids'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='SapphireRapids-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='SapphireRapids-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='SapphireRapids-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='SierraForest'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-ne-convert'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cmpccxadd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mcdt-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pbrsb-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='SierraForest-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-ne-convert'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cmpccxadd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mcdt-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pbrsb-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-v4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-v4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-v5'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Snowridge'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='core-capability'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mpx'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='split-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Snowridge-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='core-capability'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mpx'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='split-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Snowridge-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='core-capability'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='split-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Snowridge-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='core-capability'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='split-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Snowridge-v4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='athlon'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnow'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnowext'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='athlon-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnow'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnowext'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='core2duo'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='core2duo-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='coreduo'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='coreduo-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='n270'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='n270-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='phenom'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnow'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnowext'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='phenom-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnow'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnowext'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </mode>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <memoryBacking supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <enum name='sourceType'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <value>file</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <value>anonymous</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <value>memfd</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   </memoryBacking>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <disk supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='diskDevice'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>disk</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>cdrom</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>floppy</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>lun</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='bus'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>ide</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>fdc</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>scsi</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>usb</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>sata</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='model'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio-transitional</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio-non-transitional</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <graphics supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='type'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>vnc</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>egl-headless</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>dbus</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </graphics>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <video supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='modelType'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>vga</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>cirrus</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>none</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>bochs</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>ramfb</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </video>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <hostdev supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='mode'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>subsystem</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='startupPolicy'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>default</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>mandatory</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>requisite</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>optional</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='subsysType'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>usb</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>pci</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>scsi</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='capsType'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='pciBackend'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </hostdev>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <rng supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='model'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio-transitional</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio-non-transitional</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='backendModel'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>random</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>egd</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>builtin</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <filesystem supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='driverType'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>path</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>handle</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtiofs</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </filesystem>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <tpm supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='model'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>tpm-tis</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>tpm-crb</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='backendModel'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>emulator</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>external</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='backendVersion'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>2.0</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </tpm>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <redirdev supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='bus'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>usb</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </redirdev>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <channel supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='type'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>pty</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>unix</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </channel>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <crypto supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='model'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='type'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>qemu</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='backendModel'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>builtin</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </crypto>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <interface supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='backendType'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>default</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>passt</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </interface>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <panic supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='model'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>isa</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>hyperv</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </panic>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <console supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='type'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>null</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>vc</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>pty</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>dev</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>file</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>pipe</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>stdio</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>udp</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>tcp</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>unix</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>qemu-vdagent</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>dbus</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </console>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <features>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <gic supported='no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <vmcoreinfo supported='yes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <genid supported='yes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <backingStoreInput supported='yes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <backup supported='yes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <async-teardown supported='yes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <ps2 supported='yes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <sev supported='no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <sgx supported='no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <hyperv supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='features'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>relaxed</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>vapic</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>spinlocks</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>vpindex</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>runtime</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>synic</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>stimer</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>reset</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>vendor_id</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>frequencies</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>reenlightenment</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>tlbflush</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>ipi</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>avic</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>emsr_bitmap</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>xmm_input</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <defaults>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <spinlocks>4095</spinlocks>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <stimer_direct>on</stimer_direct>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </defaults>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </hyperv>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <launchSecurity supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='sectype'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>tdx</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </launchSecurity>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   </features>
Nov 22 08:15:15 compute-0 nova_compute[189268]: </domainCapabilities>
Nov 22 08:15:15 compute-0 nova_compute[189268]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.024 189273 DEBUG nova.virt.libvirt.host [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 22 08:15:15 compute-0 nova_compute[189268]: <domainCapabilities>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <domain>kvm</domain>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <arch>x86_64</arch>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <vcpu max='4096'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <iothreads supported='yes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <os supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <enum name='firmware'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <value>efi</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <loader supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='type'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>rom</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>pflash</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='readonly'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>yes</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>no</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='secure'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>yes</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>no</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </loader>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   </os>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <cpu>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <mode name='host-passthrough' supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='hostPassthroughMigratable'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>on</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>off</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </mode>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <mode name='maximum' supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='maximumMigratable'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>on</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>off</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </mode>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <mode name='host-model' supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <vendor>AMD</vendor>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='x2apic'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='hypervisor'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='stibp'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='ssbd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='overflow-recov'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='succor'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='ibrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='lbrv'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='tsc-scale'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='flushbyasid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='pause-filter'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='pfthreshold'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <feature policy='disable' name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </mode>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <mode name='custom' supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Broadwell'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Broadwell-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Broadwell-noTSX'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Broadwell-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Broadwell-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Broadwell-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Broadwell-v4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Cooperlake'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Cooperlake-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Cooperlake-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Denverton'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mpx'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Denverton-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mpx'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Denverton-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Denverton-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Dhyana-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-Genoa'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amd-psfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='auto-ibrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='no-nested-data-bp'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='null-sel-clr-base'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='stibp-always-on'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amd-psfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='auto-ibrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='no-nested-data-bp'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='null-sel-clr-base'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='stibp-always-on'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-Milan'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-Milan-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-Milan-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amd-psfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='no-nested-data-bp'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='null-sel-clr-base'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='stibp-always-on'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-Rome'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-Rome-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-Rome-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-Rome-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='EPYC-v4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='GraniteRapids'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mcdt-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pbrsb-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='prefetchiti'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='GraniteRapids-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mcdt-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pbrsb-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='prefetchiti'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='GraniteRapids-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx10'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx10-128'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx10-256'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx10-512'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mcdt-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pbrsb-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='prefetchiti'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Haswell'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Haswell-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Haswell-noTSX'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Haswell-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Haswell-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Haswell-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Haswell-v4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v5'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v6'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Icelake-Server-v7'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='IvyBridge'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='IvyBridge-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='IvyBridge-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='IvyBridge-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='KnightsMill'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-4fmaps'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-4vnniw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512er'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512pf'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='KnightsMill-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-4fmaps'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-4vnniw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512er'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512pf'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Opteron_G4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fma4'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xop'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Opteron_G4-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fma4'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xop'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Opteron_G5'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fma4'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tbm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xop'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Opteron_G5-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fma4'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tbm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xop'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='SapphireRapids'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='SapphireRapids-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='SapphireRapids-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='SapphireRapids-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='amx-tile'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-bf16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-fp16'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512-vpopcntdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bitalg'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vbmi2'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrc'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fzrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='la57'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='taa-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='tsx-ldtrk'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xfd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='SierraForest'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-ne-convert'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cmpccxadd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mcdt-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pbrsb-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='SierraForest-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-ifma'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-ne-convert'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx-vnni-int8'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='bus-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cmpccxadd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fbsdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='fsrs'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ibrs-all'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mcdt-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pbrsb-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='psdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='sbdr-ssdp-no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='serialize'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vaes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='vpclmulqdq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Client-v4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='hle'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='rtm'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-v4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Skylake-Server-v5'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512bw'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512cd'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512dq'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512f'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='avx512vl'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='invpcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pcid'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='pku'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Snowridge'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='core-capability'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mpx'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='split-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Snowridge-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='core-capability'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='mpx'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='split-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Snowridge-v2'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='core-capability'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='split-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Snowridge-v3'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='core-capability'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='split-lock-detect'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='Snowridge-v4'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='cldemote'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='erms'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='gfni'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdir64b'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='movdiri'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='xsaves'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='athlon'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnow'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnowext'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='athlon-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnow'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnowext'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='core2duo'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='core2duo-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='coreduo'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='coreduo-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='n270'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='n270-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='ss'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='phenom'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnow'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnowext'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <blockers model='phenom-v1'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnow'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <feature name='3dnowext'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </blockers>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </mode>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <memoryBacking supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <enum name='sourceType'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <value>file</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <value>anonymous</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <value>memfd</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   </memoryBacking>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <disk supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='diskDevice'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>disk</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>cdrom</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>floppy</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>lun</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='bus'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>fdc</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>scsi</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>usb</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>sata</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='model'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio-transitional</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio-non-transitional</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <graphics supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='type'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>vnc</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>egl-headless</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>dbus</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </graphics>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <video supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='modelType'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>vga</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>cirrus</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>none</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>bochs</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>ramfb</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </video>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <hostdev supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='mode'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>subsystem</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='startupPolicy'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>default</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>mandatory</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>requisite</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>optional</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='subsysType'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>usb</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>pci</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>scsi</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='capsType'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='pciBackend'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </hostdev>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <rng supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='model'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio-transitional</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtio-non-transitional</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='backendModel'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>random</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>egd</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>builtin</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <filesystem supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='driverType'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>path</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>handle</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>virtiofs</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </filesystem>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <tpm supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='model'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>tpm-tis</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>tpm-crb</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='backendModel'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>emulator</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>external</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='backendVersion'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>2.0</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </tpm>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <redirdev supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='bus'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>usb</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </redirdev>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <channel supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='type'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>pty</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>unix</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </channel>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <crypto supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='model'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='type'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>qemu</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='backendModel'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>builtin</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </crypto>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <interface supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='backendType'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>default</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>passt</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </interface>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <panic supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='model'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>isa</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>hyperv</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </panic>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <console supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='type'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>null</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>vc</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>pty</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>dev</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>file</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>pipe</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>stdio</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>udp</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>tcp</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>unix</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>qemu-vdagent</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>dbus</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </console>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   <features>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <gic supported='no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <vmcoreinfo supported='yes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <genid supported='yes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <backingStoreInput supported='yes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <backup supported='yes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <async-teardown supported='yes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <ps2 supported='yes'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <sev supported='no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <sgx supported='no'/>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <hyperv supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='features'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>relaxed</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>vapic</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>spinlocks</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>vpindex</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>runtime</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>synic</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>stimer</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>reset</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>vendor_id</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>frequencies</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>reenlightenment</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>tlbflush</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>ipi</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>avic</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>emsr_bitmap</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>xmm_input</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <defaults>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <spinlocks>4095</spinlocks>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <stimer_direct>on</stimer_direct>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </defaults>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </hyperv>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     <launchSecurity supported='yes'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       <enum name='sectype'>
Nov 22 08:15:15 compute-0 nova_compute[189268]:         <value>tdx</value>
Nov 22 08:15:15 compute-0 nova_compute[189268]:       </enum>
Nov 22 08:15:15 compute-0 nova_compute[189268]:     </launchSecurity>
Nov 22 08:15:15 compute-0 nova_compute[189268]:   </features>
Nov 22 08:15:15 compute-0 nova_compute[189268]: </domainCapabilities>
Nov 22 08:15:15 compute-0 nova_compute[189268]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
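[Annotation] The XML dumped above is libvirt's domainCapabilities document, retrieved by nova's _get_domain_capabilities helper. A minimal sketch of the same query through the libvirt Python bindings, filtering for the deprecated CPU models visible in the dump — the connection URI and the custom-mode filter are illustrative assumptions, not nova's exact code path:

import xml.etree.ElementTree as ET

import libvirt  # python3-libvirt bindings, assumed installed

conn = libvirt.open('qemu:///system')  # assumption: local system URI
# emulator/machine left as None so libvirt picks host defaults; flags must be 0
caps_xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
root = ET.fromstring(caps_xml)

# Walk the custom-mode CPU models, mirroring the
# <model usable=.. deprecated=..> entries in the dump above
for model in root.findall("./cpu/mode[@name='custom']/model"):
    if model.get('deprecated') == 'yes':
        print(model.text, 'usable=%s' % model.get('usable'))

conn.close()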
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.096 189273 DEBUG nova.virt.libvirt.host [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.096 189273 INFO nova.virt.libvirt.host [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Secure Boot support detected
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.098 189273 INFO nova.virt.libvirt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.108 189273 DEBUG nova.virt.libvirt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.158 189273 INFO nova.virt.node [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Determined node identity 699bf240-9d16-48c7-bff5-24c8bb8aac19 from /var/lib/nova/compute_id
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.178 189273 WARNING nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Compute nodes ['699bf240-9d16-48c7-bff5-24c8bb8aac19'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.214 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.271 189273 WARNING nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.272 189273 DEBUG oslo_concurrency.lockutils [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.272 189273 DEBUG oslo_concurrency.lockutils [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.272 189273 DEBUG oslo_concurrency.lockutils [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.272 189273 DEBUG nova.compute.resource_tracker [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:15:15 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 22 08:15:15 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.576 189273 WARNING nova.virt.libvirt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.578 189273 DEBUG nova.compute.resource_tracker [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6101MB free_disk=72.72987747192383GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.578 189273 DEBUG oslo_concurrency.lockutils [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.578 189273 DEBUG oslo_concurrency.lockutils [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.591 189273 WARNING nova.compute.resource_tracker [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] No compute node record for compute-0.ctlplane.example.com:699bf240-9d16-48c7-bff5-24c8bb8aac19: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 699bf240-9d16-48c7-bff5-24c8bb8aac19 could not be found.
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.617 189273 INFO nova.compute.resource_tracker [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 699bf240-9d16-48c7-bff5-24c8bb8aac19
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.685 189273 DEBUG nova.compute.resource_tracker [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:15:15 compute-0 nova_compute[189268]: 2025-11-22 08:15:15.685 189273 DEBUG nova.compute.resource_tracker [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:15:16 compute-0 nova_compute[189268]: 2025-11-22 08:15:16.719 189273 INFO nova.scheduler.client.report [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [req-8739b1ec-7f22-401a-9f87-bf92f3920adc] Created resource provider record via placement API for resource provider with UUID 699bf240-9d16-48c7-bff5-24c8bb8aac19 and name compute-0.ctlplane.example.com.
Nov 22 08:15:17 compute-0 nova_compute[189268]: 2025-11-22 08:15:17.138 189273 DEBUG nova.virt.libvirt.host [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 22 08:15:17 compute-0 nova_compute[189268]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 22 08:15:17 compute-0 nova_compute[189268]: 2025-11-22 08:15:17.139 189273 INFO nova.virt.libvirt.host [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] kernel doesn't support AMD SEV
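[Annotation] The two lines above come from nova probing /sys/module/kvm_amd/parameters/sev, which reads [N] on this KVM-virtualized Intel host, hence "kernel doesn't support AMD SEV". A minimal sketch of that probe, assuming the usual convention that '1' or 'Y' means enabled (nova's actual check lives in _kernel_supports_amd_sev):

from pathlib import Path

SEV_PARAM = Path('/sys/module/kvm_amd/parameters/sev')

def kernel_supports_amd_sev() -> bool:
    if not SEV_PARAM.exists():
        return False  # kvm_amd module not loaded at all
    # 'N' (as logged above) or '0' means the kernel has SEV disabled
    return SEV_PARAM.read_text().strip() in ('1', 'Y')

print(kernel_supports_amd_sev())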
Nov 22 08:15:17 compute-0 nova_compute[189268]: 2025-11-22 08:15:17.139 189273 DEBUG nova.compute.provider_tree [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Updating inventory in ProviderTree for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 08:15:17 compute-0 nova_compute[189268]: 2025-11-22 08:15:17.140 189273 DEBUG nova.virt.libvirt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 08:15:17 compute-0 nova_compute[189268]: 2025-11-22 08:15:17.197 189273 DEBUG nova.scheduler.client.report [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Updated inventory for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 22 08:15:17 compute-0 nova_compute[189268]: 2025-11-22 08:15:17.197 189273 DEBUG nova.compute.provider_tree [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Updating resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 22 08:15:17 compute-0 nova_compute[189268]: 2025-11-22 08:15:17.197 189273 DEBUG nova.compute.provider_tree [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Updating inventory in ProviderTree for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 08:15:17 compute-0 nova_compute[189268]: 2025-11-22 08:15:17.282 189273 DEBUG nova.compute.provider_tree [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Updating resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
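[Annotation] The inventory posted to Placement above determines schedulable capacity as (total - reserved) * allocation_ratio. A worked sketch using the exact values from this log:

inventory = {
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'DISK_GB':   {'total': 79,   'reserved': 0,   'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    # standard Placement capacity formula
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f"{rc}: schedulable capacity = {capacity:g}")
# MEMORY_MB: 7167, VCPU: 32, DISK_GB: 71.1 -- so this 8-vCPU node can host
# up to 32 vCPUs' worth of instances at the configured 4.0 ratio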
Nov 22 08:15:17 compute-0 nova_compute[189268]: 2025-11-22 08:15:17.305 189273 DEBUG nova.compute.resource_tracker [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:15:17 compute-0 nova_compute[189268]: 2025-11-22 08:15:17.306 189273 DEBUG oslo_concurrency.lockutils [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:15:17 compute-0 nova_compute[189268]: 2025-11-22 08:15:17.306 189273 DEBUG nova.service [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 22 08:15:17 compute-0 nova_compute[189268]: 2025-11-22 08:15:17.348 189273 DEBUG nova.service [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 22 08:15:17 compute-0 nova_compute[189268]: 2025-11-22 08:15:17.349 189273 DEBUG nova.servicegroup.drivers.db [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 22 08:15:17 compute-0 nova_compute[189268]: 2025-11-22 08:15:17.350 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:15:17 compute-0 nova_compute[189268]: 2025-11-22 08:15:17.366 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:15:18 compute-0 sshd-session[189635]: Accepted publickey for zuul from 192.168.122.30 port 58694 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 08:15:18 compute-0 systemd-logind[826]: New session 26 of user zuul.
Nov 22 08:15:18 compute-0 systemd[1]: Started Session 26 of User zuul.
Nov 22 08:15:18 compute-0 sshd-session[189635]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 08:15:19 compute-0 python3.9[189788]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:15:20 compute-0 sudo[189942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcouqgauhyfphdrjoevdfbxtlyozynqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799320.3139608-36-80069359261855/AnsiballZ_systemd_service.py'
Nov 22 08:15:20 compute-0 sudo[189942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:21 compute-0 python3.9[189944]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:15:21 compute-0 systemd[1]: Reloading.
Nov 22 08:15:21 compute-0 systemd-rc-local-generator[189970]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:15:21 compute-0 systemd-sysv-generator[189973]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:15:21 compute-0 sudo[189942]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:22 compute-0 python3.9[190129]: ansible-ansible.builtin.service_facts Invoked
Nov 22 08:15:22 compute-0 network[190146]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 08:15:22 compute-0 network[190147]: 'network-scripts' will be removed from distribution in near future.
Nov 22 08:15:22 compute-0 network[190148]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 08:15:23 compute-0 podman[190154]: 2025-11-22 08:15:23.563993576 +0000 UTC m=+0.115352635 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
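[Annotation] The health_status=healthy events here (and again below for multipathd and ovn_metadata_agent) are emitted by podman's healthcheck timer running the configured '/openstack/healthcheck' test inside each container. The same check can be driven by hand; a sketch, with the container name taken from the log:

import subprocess

# exit code 0 maps to health_status=healthy in the journal events above
result = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller'])
print('healthy' if result.returncode == 0 else 'unhealthy')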
Nov 22 08:15:27 compute-0 sudo[190444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvymrvdatiiqowzruyyryfrtsryhcrdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799327.048126-55-221317930534049/AnsiballZ_systemd_service.py'
Nov 22 08:15:27 compute-0 sudo[190444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:27 compute-0 python3.9[190446]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:15:27 compute-0 sudo[190444]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:28 compute-0 sudo[190597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdfapmweejeyenmsgdpqafbcxpohvlvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799327.9382687-65-171981471792297/AnsiballZ_file.py'
Nov 22 08:15:28 compute-0 sudo[190597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:28 compute-0 python3.9[190599]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:15:28 compute-0 sudo[190597]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:28 compute-0 rsyslogd[1013]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 08:15:29 compute-0 sudo[190750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaizjloueedvgwkbqmugjbcnmyelzsxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799328.991825-73-48535478711350/AnsiballZ_file.py'
Nov 22 08:15:29 compute-0 sudo[190750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:29 compute-0 python3.9[190752]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:15:29 compute-0 sudo[190750]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:30 compute-0 sudo[190902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjvjmmrjnkgyfjpestprrzasnnrismqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799329.7448385-82-97042270817860/AnsiballZ_command.py'
Nov 22 08:15:30 compute-0 sudo[190902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:30 compute-0 python3.9[190904]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:15:30 compute-0 sudo[190902]: pam_unix(sudo:session): session closed for user root
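[Annotation] The shell fragment invoked above guards each step: certmonger is disabled only if currently active, and masked only when no unit file exists under /etc/systemd/system. A minimal Python rendering of the same guard logic (illustrative, not the Ansible module's implementation):

import subprocess
from pathlib import Path

def disable_certmonger() -> None:
    # `systemctl is-active` exits 0 only for an active unit
    if subprocess.run(['systemctl', 'is-active', 'certmonger.service'],
                      capture_output=True).returncode != 0:
        return
    subprocess.run(['systemctl', 'disable', '--now', 'certmonger.service'],
                   check=True)
    # mask only when no local unit file exists (mirrors `test -f || mask`)
    if not Path('/etc/systemd/system/certmonger.service').is_file():
        subprocess.run(['systemctl', 'mask', 'certmonger.service'], check=True)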
Nov 22 08:15:31 compute-0 python3.9[191056]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 08:15:31 compute-0 sudo[191206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clhwaxwtssgghgvgcqvizbuinzytaayd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799331.4673812-100-133330961403765/AnsiballZ_systemd_service.py'
Nov 22 08:15:31 compute-0 sudo[191206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:32 compute-0 python3.9[191208]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:15:32 compute-0 systemd[1]: Reloading.
Nov 22 08:15:32 compute-0 systemd-sysv-generator[191239]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:15:32 compute-0 systemd-rc-local-generator[191236]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:15:32 compute-0 sudo[191206]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:32 compute-0 sudo[191393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txzakrafrlehpbdolpzwkjvsifeswwjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799332.6162634-108-66886802478353/AnsiballZ_command.py'
Nov 22 08:15:32 compute-0 sudo[191393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:33 compute-0 python3.9[191395]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:15:33 compute-0 sudo[191393]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:33 compute-0 sudo[191546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adgyuzzxrsbchdfzoqtvfjdatqpnuqmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799333.381543-117-119596500544227/AnsiballZ_file.py'
Nov 22 08:15:33 compute-0 sudo[191546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:33 compute-0 python3.9[191548]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:15:33 compute-0 sudo[191546]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:34 compute-0 python3.9[191698]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:15:35 compute-0 python3.9[191850]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:15:36 compute-0 python3.9[191971]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763799334.9313078-133-254837420863537/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:15:36 compute-0 sudo[192121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmdstcncdhtgoptfviybcqyxynarhatc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799336.343473-148-180791495035880/AnsiballZ_group.py'
Nov 22 08:15:36 compute-0 sudo[192121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:37 compute-0 python3.9[192123]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Nov 22 08:15:37 compute-0 sudo[192121]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:37 compute-0 sudo[192273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbzqkwugksfocfmxryenipnqeofqqacr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799337.4127016-159-68265036165406/AnsiballZ_getent.py'
Nov 22 08:15:37 compute-0 sudo[192273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:38 compute-0 python3.9[192275]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Nov 22 08:15:38 compute-0 sudo[192273]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:38 compute-0 sudo[192426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwfsjggkgxfgzmuxwjsxigijcdxrrvfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799338.235256-167-8134819695857/AnsiballZ_group.py'
Nov 22 08:15:38 compute-0 sudo[192426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:38 compute-0 python3.9[192428]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 08:15:38 compute-0 groupadd[192429]: group added to /etc/group: name=ceilometer, GID=42405
Nov 22 08:15:38 compute-0 groupadd[192429]: group added to /etc/gshadow: name=ceilometer
Nov 22 08:15:38 compute-0 groupadd[192429]: new group: name=ceilometer, GID=42405
Nov 22 08:15:38 compute-0 sudo[192426]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:39 compute-0 sudo[192584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynsviwggucdkyqschaftzrwwtovxrojh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799338.900129-175-156272984974383/AnsiballZ_user.py'
Nov 22 08:15:39 compute-0 sudo[192584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:15:39 compute-0 python3.9[192586]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 08:15:39 compute-0 useradd[192588]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Nov 22 08:15:39 compute-0 useradd[192588]: add 'ceilometer' to group 'libvirt'
Nov 22 08:15:39 compute-0 useradd[192588]: add 'ceilometer' to shadow group 'libvirt'
Nov 22 08:15:39 compute-0 sudo[192584]: pam_unix(sudo:session): session closed for user root
Nov 22 08:15:40 compute-0 python3.9[192744]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:15:41 compute-0 python3.9[192865]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1763799340.357916-201-257550020272950/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:15:42 compute-0 python3.9[193015]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:15:42 compute-0 python3.9[193136]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1763799341.6574209-201-249479121836508/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:15:43 compute-0 python3.9[193286]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:15:43 compute-0 python3.9[193407]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1763799342.8197272-201-34364784483306/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
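[Annotation] Each telemetry config file above is written via the same stat-then-copy pattern: the destination is stat'ed with a SHA-1 checksum, and the copy runs only when that checksum differs from the rendered source. A sketch of the idempotency check (paths illustrative, not ansible-core's implementation):

import hashlib
import shutil
from pathlib import Path

def copy_if_changed(src: Path, dest: Path) -> bool:
    sha1 = lambda p: hashlib.sha1(p.read_bytes()).hexdigest()
    if dest.exists() and sha1(dest) == sha1(src):
        return False  # checksums match -> task reports ok, file untouched
    shutil.copy2(src, dest)
    return True       # differs or missing -> task reports changed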
Nov 22 08:15:44 compute-0 podman[193531]: 2025-11-22 08:15:44.320754399 +0000 UTC m=+0.072543311 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:15:44 compute-0 podman[193532]: 2025-11-22 08:15:44.323237179 +0000 UTC m=+0.069280310 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 22 08:15:44 compute-0 python3.9[193582]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:15:45 compute-0 python3.9[193748]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:15:45 compute-0 python3.9[193900]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:15:46 compute-0 python3.9[194021]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799345.3560438-260-249825361058034/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:15:46 compute-0 python3.9[194171]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:15:47 compute-0 python3.9[194247]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:15:47 compute-0 python3.9[194397]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:15:48 compute-0 python3.9[194518]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799347.441887-260-213605603589141/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:15:49 compute-0 python3.9[194668]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:15:49 compute-0 python3.9[194789]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799348.5933352-260-132333212602275/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:15:50 compute-0 python3.9[194939]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:15:50 compute-0 python3.9[195060]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799349.9720855-260-279278962103834/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:15:52 compute-0 python3.9[195210]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:15:52 compute-0 python3.9[195331]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799351.1446466-260-148444575141390/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:15:53 compute-0 python3.9[195481]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:15:54 compute-0 podman[195529]: 2025-11-22 08:15:54.157498419 +0000 UTC m=+0.102619062 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:15:54 compute-0 python3.9[195628]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799352.8849475-260-165285886666985/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:15:55 compute-0 python3.9[195778]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:15:55 compute-0 python3.9[195899]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799354.6524444-260-174885973829625/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:15:56 compute-0 python3.9[196049]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:15:56 compute-0 python3.9[196170]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799355.8591692-260-115240338016456/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:15:57 compute-0 python3.9[196320]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:15:58 compute-0 python3.9[196441]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799356.9948344-260-216564850661684/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:15:58 compute-0 python3.9[196591]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:15:59 compute-0 python3.9[196712]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799358.2091532-260-148453606273714/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:15:59 compute-0 python3.9[196862]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:16:00 compute-0 python3.9[196938]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:16:01 compute-0 python3.9[197088]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:16:01 compute-0 python3.9[197164]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:16:02 compute-0 python3.9[197314]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:16:02 compute-0 python3.9[197390]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:16:03 compute-0 sudo[197540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhpysftellbydebytdpjlqdyojtefmrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799363.113902-449-253302014020759/AnsiballZ_file.py'
Nov 22 08:16:03 compute-0 sudo[197540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:03 compute-0 python3.9[197542]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:16:03 compute-0 sudo[197540]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:04 compute-0 sudo[197692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkcaghwdyonsfgfjrcmtdlppbamubmcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799363.7841597-457-214982017702946/AnsiballZ_file.py'
Nov 22 08:16:04 compute-0 sudo[197692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:04 compute-0 python3.9[197694]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:16:04 compute-0 sudo[197692]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:04 compute-0 sudo[197844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjxwisstisbfgupoceofhlhwsojnhpiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799364.455787-465-60735027218180/AnsiballZ_file.py'
Nov 22 08:16:04 compute-0 sudo[197844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:05 compute-0 python3.9[197846]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:16:05 compute-0 sudo[197844]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:05 compute-0 sudo[197996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szmokzhwisaimiicrzveyubayjlumywm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799365.3055608-473-144228584494912/AnsiballZ_systemd_service.py'
Nov 22 08:16:05 compute-0 sudo[197996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:05 compute-0 python3.9[197998]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:16:06 compute-0 systemd[1]: Reloading.
Nov 22 08:16:06 compute-0 systemd-rc-local-generator[198029]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:16:06 compute-0 systemd-sysv-generator[198032]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:16:06 compute-0 systemd[1]: Listening on Podman API Socket.
Nov 22 08:16:06 compute-0 sudo[197996]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:07 compute-0 sudo[198187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzjxpdcecmygticlbksvgpziqmskajfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799366.7965064-482-210437471583451/AnsiballZ_stat.py'
Nov 22 08:16:07 compute-0 sudo[198187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:07 compute-0 python3.9[198189]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:16:07 compute-0 sudo[198187]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:07 compute-0 sudo[198310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jktfnkwnbymsrhdksmptrqwiutgzcbcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799366.7965064-482-210437471583451/AnsiballZ_copy.py'
Nov 22 08:16:07 compute-0 sudo[198310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:07 compute-0 python3.9[198312]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763799366.7965064-482-210437471583451/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:16:07 compute-0 sudo[198310]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:08 compute-0 sudo[198386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrpsgdkzvhcoxupigbhgahywzvlkxhuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799366.7965064-482-210437471583451/AnsiballZ_stat.py'
Nov 22 08:16:08 compute-0 sudo[198386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:08 compute-0 python3.9[198388]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:16:08 compute-0 sudo[198386]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:08 compute-0 sudo[198509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnudebxzwmbohsyikbmjrbjmnchcgkhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799366.7965064-482-210437471583451/AnsiballZ_copy.py'
Nov 22 08:16:08 compute-0 sudo[198509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:08 compute-0 python3.9[198511]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763799366.7965064-482-210437471583451/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:16:08 compute-0 sudo[198509]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:09 compute-0 sudo[198661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpjvknueedasqwkaczgwgenoygfwbxvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799369.2596426-510-196355073029852/AnsiballZ_container_config_data.py'
Nov 22 08:16:09 compute-0 sudo[198661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:16:09.944 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:16:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:16:09.946 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:16:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:16:09.946 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:16:09 compute-0 python3.9[198663]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Nov 22 08:16:09 compute-0 sudo[198661]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:10 compute-0 sudo[198813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgusupctyezmuovhnugcpjuqtfybbpan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799370.2063296-519-263091602915879/AnsiballZ_container_config_hash.py'
Nov 22 08:16:10 compute-0 sudo[198813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:10 compute-0 python3.9[198815]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 08:16:10 compute-0 sudo[198813]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:11 compute-0 sudo[198965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyzpmzifxozrtkubrjgizzdbzadotiuo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763799371.1988704-529-39492212475474/AnsiballZ_edpm_container_manage.py'
Nov 22 08:16:11 compute-0 sudo[198965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:12 compute-0 python3[198967]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 08:16:12 compute-0 podman[199005]: 2025-11-22 08:16:12.269358365 +0000 UTC m=+0.070392010 container create c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true)
Nov 22 08:16:12 compute-0 podman[199005]: 2025-11-22 08:16:12.228309197 +0000 UTC m=+0.029342882 image pull 9bdd8ae00d8946a2ce2c9113b1770ecde661cc666ba6fcde2c074d087d635114 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Nov 22 08:16:12 compute-0 python3[198967]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Nov 22 08:16:12 compute-0 sudo[198965]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:12 compute-0 sudo[199193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycjjdnmqdqgfwwxhzuphocinfwajrhab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799372.5846136-537-72174770538093/AnsiballZ_stat.py'
Nov 22 08:16:12 compute-0 sudo[199193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:13 compute-0 python3.9[199195]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:16:13 compute-0 sudo[199193]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.101 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.102 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.102 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.112 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.113 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.113 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.114 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.114 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.114 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.115 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.115 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.115 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.136 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.137 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.137 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.137 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.307 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.308 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6080MB free_disk=72.72978973388672GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.308 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.308 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.361 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.361 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.380 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.391 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.393 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:16:14 compute-0 nova_compute[189268]: 2025-11-22 08:16:14.393 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:16:14 compute-0 sudo[199371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syciopkvumwkhbjlymsiqkbnwkwubkrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799374.2978342-546-207768954410534/AnsiballZ_file.py'
Nov 22 08:16:14 compute-0 sudo[199371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:14 compute-0 podman[199322]: 2025-11-22 08:16:14.589814495 +0000 UTC m=+0.072127909 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:16:14 compute-0 podman[199321]: 2025-11-22 08:16:14.621596035 +0000 UTC m=+0.101176672 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 08:16:14 compute-0 python3.9[199386]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:16:14 compute-0 sudo[199371]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:15 compute-0 sudo[199535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fleacrnpbagxlcjuhknqsktfgkobmtwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799374.8950088-546-355479442657/AnsiballZ_copy.py'
Nov 22 08:16:15 compute-0 sudo[199535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:15 compute-0 python3.9[199537]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763799374.8950088-546-355479442657/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:16:15 compute-0 sudo[199535]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:16 compute-0 sudo[199611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgunzdcmygdyrhzptmgchzfcpzyjmfwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799374.8950088-546-355479442657/AnsiballZ_systemd.py'
Nov 22 08:16:16 compute-0 sudo[199611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:17 compute-0 python3.9[199613]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:16:17 compute-0 systemd[1]: Reloading.
Nov 22 08:16:17 compute-0 systemd-rc-local-generator[199636]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:16:17 compute-0 systemd-sysv-generator[199639]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:16:17 compute-0 sudo[199611]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:17 compute-0 sudo[199722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwitzmzspoglwdkgnaqnriuzemicuuxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799374.8950088-546-355479442657/AnsiballZ_systemd.py'
Nov 22 08:16:17 compute-0 sudo[199722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:17 compute-0 python3.9[199724]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:16:18 compute-0 systemd[1]: Reloading.
Nov 22 08:16:18 compute-0 systemd-rc-local-generator[199753]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:16:18 compute-0 systemd-sysv-generator[199757]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:16:18 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Nov 22 08:16:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9237d6156dbd75a437062f752ea8ea848dcfe295860ac9a2f8e24b82ef4154/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9237d6156dbd75a437062f752ea8ea848dcfe295860ac9a2f8e24b82ef4154/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9237d6156dbd75a437062f752ea8ea848dcfe295860ac9a2f8e24b82ef4154/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9237d6156dbd75a437062f752ea8ea848dcfe295860ac9a2f8e24b82ef4154/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:18 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d.
Nov 22 08:16:18 compute-0 podman[199764]: 2025-11-22 08:16:18.717075944 +0000 UTC m=+0.327771522 container init c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: + sudo -E kolla_set_configs
Nov 22 08:16:18 compute-0 sudo[199785]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: sudo: unable to send audit message: Operation not permitted
Nov 22 08:16:18 compute-0 sudo[199785]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 22 08:16:18 compute-0 podman[199764]: 2025-11-22 08:16:18.754546823 +0000 UTC m=+0.365242371 container start c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_id=edpm, org.label-schema.vendor=CentOS)
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: INFO:__main__:Validating config file
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: INFO:__main__:Copying service configuration files
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: INFO:__main__:Writing out command to execute
Nov 22 08:16:18 compute-0 sudo[199785]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: ++ cat /run_command
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: + ARGS=
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: + sudo kolla_copy_cacerts
Nov 22 08:16:18 compute-0 sudo[199801]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: sudo: unable to send audit message: Operation not permitted
Nov 22 08:16:18 compute-0 sudo[199801]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 22 08:16:18 compute-0 sudo[199801]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: + [[ ! -n '' ]]
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: + . kolla_extend_start
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: + umask 0022
Nov 22 08:16:18 compute-0 ceilometer_agent_compute[199779]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Nov 22 08:16:18 compute-0 podman[199764]: ceilometer_agent_compute
Nov 22 08:16:18 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Nov 22 08:16:18 compute-0 sudo[199722]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:19 compute-0 podman[199786]: 2025-11-22 08:16:19.015971848 +0000 UTC m=+0.245871761 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm)
Nov 22 08:16:19 compute-0 systemd[1]: c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d-39c5d8279132cdf5.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 08:16:19 compute-0 systemd[1]: c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d-39c5d8279132cdf5.service: Failed with result 'exit-code'.
Nov 22 08:16:19 compute-0 sudo[199959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdqkqnphdkpfqaitiqancveagmqspbfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799379.1588874-570-215914385379492/AnsiballZ_systemd.py'
Nov 22 08:16:19 compute-0 sudo[199959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:19 compute-0 python3.9[199961]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.813 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.813 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.813 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.813 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.814 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.814 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.814 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.814 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.814 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.814 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.814 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.814 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.814 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.814 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.815 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.815 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.815 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.815 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.815 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.815 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.815 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.815 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.816 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.816 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.816 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.816 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.816 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.816 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.816 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.816 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.816 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.816 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.817 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.817 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.817 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.817 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.817 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.817 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.817 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.817 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.817 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.817 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.817 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.817 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.817 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.818 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.818 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.818 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.818 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.818 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.818 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.818 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.818 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.818 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.818 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.819 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.819 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.819 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.819 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.819 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.819 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.819 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.819 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.819 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.819 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.819 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.820 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.820 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.820 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.820 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.820 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.820 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.820 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.820 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.820 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.820 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.820 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.821 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.821 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.821 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.821 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.821 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.821 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.821 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.821 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.821 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.821 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.821 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.822 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.822 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.822 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.822 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.822 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.822 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.822 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.822 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.822 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.822 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.822 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.822 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.823 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.823 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.823 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.823 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.823 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.823 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.823 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.823 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.823 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.823 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.823 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.823 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.823 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.824 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.824 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.824 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.824 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.824 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.824 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.824 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.824 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.824 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.824 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.824 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.824 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.824 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.825 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.825 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.825 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.825 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.825 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.825 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 systemd[1]: Stopping ceilometer_agent_compute container...
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.825 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.825 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.825 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.825 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.826 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.826 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.826 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.826 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.826 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.826 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.826 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.826 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.826 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.827 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.827 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.827 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.827 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.827 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.827 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.827 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.827 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.847 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.848 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.848 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.848 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.848 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.848 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.849 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.849 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.849 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.849 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.849 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.849 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.849 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.849 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.849 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.849 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.849 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.849 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.850 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.850 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.850 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.850 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.850 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.850 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.850 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.850 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.850 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.850 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.850 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.850 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.850 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.850 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.851 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.851 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.851 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.851 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.851 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.851 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.851 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.851 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.851 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.851 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.851 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.851 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.851 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.851 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.852 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.852 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.852 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.852 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.852 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.852 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.852 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.852 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.852 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.852 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.852 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.852 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.853 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.853 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.853 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.853 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.853 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.853 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.853 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.853 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.853 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.853 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.853 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.853 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.854 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.854 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.854 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.854 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.854 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.854 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.854 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.854 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.854 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.854 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.854 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.855 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.855 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.855 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.855 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.855 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.855 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.855 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.855 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.855 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.855 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.855 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.855 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.855 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.856 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.856 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.856 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.856 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.856 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.856 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.856 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.856 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.856 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.856 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.856 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.856 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.856 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.857 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.857 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.857 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.857 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.857 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.857 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.857 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.857 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.857 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.857 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.857 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.857 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.858 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.858 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.858 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.858 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.858 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.858 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.858 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.858 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.858 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.858 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.858 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.858 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.858 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.859 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.859 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.859 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.859 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.859 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.859 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.859 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.859 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.859 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.859 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.859 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.859 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.859 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.860 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.860 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.860 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.860 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.860 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.860 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.860 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.860 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.860 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.862 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.864 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.865 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.874 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.975 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.976 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Nov 22 08:16:19 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:19.976 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.048 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.057 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.057 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.057 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.182 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.183 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.183 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.183 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.183 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.183 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.183 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.183 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.183 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.184 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.184 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.184 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.184 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.184 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.184 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.184 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.184 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.184 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.185 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.185 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.185 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.185 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.185 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.185 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.185 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.185 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.185 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.186 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.186 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.186 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.186 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.186 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.186 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.186 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.186 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.186 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.186 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.187 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.187 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.187 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.187 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.187 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.187 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.187 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.187 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.187 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.187 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.187 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.188 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.188 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.188 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.188 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.188 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.188 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.188 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.188 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.188 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.188 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.188 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.189 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.189 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.189 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.189 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.189 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.189 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.189 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.189 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.189 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.189 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.189 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.190 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.190 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.190 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.190 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.190 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.190 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.190 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.190 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.190 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.190 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.190 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.191 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.191 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.191 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.191 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.191 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.191 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.191 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.191 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.191 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.191 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.192 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.192 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.192 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.192 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.192 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.192 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.192 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.192 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.192 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.192 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.193 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.193 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.193 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.193 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.193 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.193 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.193 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.193 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.193 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.193 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.194 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.194 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.194 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.194 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.194 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.194 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.194 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.194 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.194 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.194 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.195 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.195 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.195 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.195 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.195 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.195 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.195 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.195 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.195 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.195 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.195 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.195 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.195 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.195 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.196 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.196 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.196 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.196 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.196 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.196 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.196 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.196 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.196 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.196 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.196 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.196 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.196 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.197 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.197 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.197 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.197 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.197 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.197 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.197 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.197 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.197 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.197 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.197 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.198 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.198 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.198 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.198 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.198 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.198 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.198 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.198 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.198 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.198 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.199 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[199779]: 2025-11-22 08:16:20.208 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
Nov 22 08:16:20 compute-0 virtqemud[189170]: End of file while reading data: Input/output error
Nov 22 08:16:20 compute-0 systemd[1]: libpod-c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d.scope: Deactivated successfully.
Nov 22 08:16:20 compute-0 conmon[199779]: conmon c75207e5ade1c7391ebc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d.scope/container/memory.events
Nov 22 08:16:20 compute-0 systemd[1]: libpod-c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d.scope: Consumed 1.757s CPU time.
Nov 22 08:16:20 compute-0 podman[199965]: 2025-11-22 08:16:20.470867679 +0000 UTC m=+0.634269229 container died c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Nov 22 08:16:20 compute-0 systemd[1]: c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d-39c5d8279132cdf5.timer: Deactivated successfully.
Nov 22 08:16:20 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d.
Nov 22 08:16:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d-userdata-shm.mount: Deactivated successfully.
Nov 22 08:16:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed9237d6156dbd75a437062f752ea8ea848dcfe295860ac9a2f8e24b82ef4154-merged.mount: Deactivated successfully.
Nov 22 08:16:20 compute-0 podman[199965]: 2025-11-22 08:16:20.642937493 +0000 UTC m=+0.806339053 container cleanup c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 22 08:16:20 compute-0 podman[199965]: ceilometer_agent_compute
Nov 22 08:16:20 compute-0 podman[200004]: ceilometer_agent_compute
Nov 22 08:16:20 compute-0 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Nov 22 08:16:20 compute-0 systemd[1]: Stopped ceilometer_agent_compute container.
Nov 22 08:16:20 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Nov 22 08:16:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9237d6156dbd75a437062f752ea8ea848dcfe295860ac9a2f8e24b82ef4154/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9237d6156dbd75a437062f752ea8ea848dcfe295860ac9a2f8e24b82ef4154/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9237d6156dbd75a437062f752ea8ea848dcfe295860ac9a2f8e24b82ef4154/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9237d6156dbd75a437062f752ea8ea848dcfe295860ac9a2f8e24b82ef4154/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:20 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d.
Nov 22 08:16:20 compute-0 podman[200017]: 2025-11-22 08:16:20.894967796 +0000 UTC m=+0.162096797 container init c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: + sudo -E kolla_set_configs
Nov 22 08:16:20 compute-0 sudo[200035]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: sudo: unable to send audit message: Operation not permitted
Nov 22 08:16:20 compute-0 sudo[200035]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 22 08:16:20 compute-0 podman[200017]: 2025-11-22 08:16:20.932556007 +0000 UTC m=+0.199684908 container start c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:16:20 compute-0 podman[200017]: ceilometer_agent_compute
Nov 22 08:16:20 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Validating config file
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Copying service configuration files
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: INFO:__main__:Writing out command to execute
Nov 22 08:16:20 compute-0 sudo[200035]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: ++ cat /run_command
Nov 22 08:16:20 compute-0 sudo[199959]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: + ARGS=
Nov 22 08:16:20 compute-0 ceilometer_agent_compute[200029]: + sudo kolla_copy_cacerts
Nov 22 08:16:21 compute-0 sudo[200057]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: sudo: unable to send audit message: Operation not permitted
Nov 22 08:16:21 compute-0 sudo[200057]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 22 08:16:21 compute-0 podman[200039]: 2025-11-22 08:16:21.004229733 +0000 UTC m=+0.062151060 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 22 08:16:21 compute-0 sudo[200057]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: + [[ ! -n '' ]]
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: + . kolla_extend_start
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: + umask 0022
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Nov 22 08:16:21 compute-0 systemd[1]: c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d-fad5c9c3264056a.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 08:16:21 compute-0 systemd[1]: c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d-fad5c9c3264056a.service: Failed with result 'exit-code'.
Nov 22 08:16:21 compute-0 sudo[200213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucpgexlmepeyobzzodyxevcynklteuio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799381.1331575-578-15221450899950/AnsiballZ_stat.py'
Nov 22 08:16:21 compute-0 sudo[200213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:21 compute-0 python3.9[200215]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:16:21 compute-0 sudo[200213]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.818 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.818 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.818 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.818 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.818 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.818 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.818 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.819 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.819 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.819 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.819 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.819 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.819 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.819 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.819 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.819 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.819 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.819 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.820 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.820 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.820 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.820 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.820 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.820 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.820 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.820 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.820 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.820 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.821 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.821 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.821 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.821 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.821 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.821 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.821 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.821 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.821 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.821 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.821 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.821 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.821 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.821 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.822 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.822 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.822 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.822 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.822 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.822 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.822 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.822 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.822 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.822 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.822 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.822 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.822 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.822 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.823 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.823 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.823 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.823 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.823 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.823 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.823 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.823 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.823 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.823 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.823 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.823 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.824 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.824 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.824 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.824 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.824 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.824 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.824 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.824 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.824 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.824 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.824 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.825 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.825 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.825 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.825 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.825 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.825 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.825 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.825 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.825 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.825 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.825 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.826 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.826 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.826 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.826 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.826 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.826 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.826 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.826 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.826 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.826 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.826 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.826 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.827 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.827 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.827 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.827 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.827 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.827 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.827 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.827 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.827 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.827 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.827 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.828 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.828 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.828 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.828 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.828 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.828 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.828 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.828 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.828 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.828 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.828 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.828 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.829 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.829 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.829 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.829 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.829 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.829 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.829 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.829 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.829 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.829 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.829 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.829 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.829 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.829 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.830 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.830 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.830 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.830 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.830 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.830 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.830 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.830 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.830 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.830 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.830 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.830 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.830 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.831 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.851 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.851 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.851 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.852 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.852 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.852 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.852 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.852 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.852 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.852 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.852 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.852 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.852 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.852 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.852 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.852 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.853 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.853 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.853 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.853 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.853 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.853 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.853 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.853 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.853 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.853 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.853 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.853 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.853 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.853 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.853 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.854 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.854 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.854 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.854 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.854 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.854 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.854 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.854 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.854 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.854 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.854 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.854 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.854 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.854 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.854 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.854 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.854 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.855 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.855 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.855 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.855 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.855 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.855 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.855 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.855 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.855 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.855 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.855 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.855 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.855 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.855 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.855 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.855 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.855 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.855 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.856 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.856 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.856 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.856 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.856 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.856 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.856 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.856 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.856 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.856 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.856 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.856 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.856 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.856 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.856 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.856 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.857 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.857 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.857 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.857 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.857 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.857 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.857 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.857 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.857 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.857 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.857 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.857 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.857 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.857 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.857 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.857 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.857 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.857 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.858 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.858 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.858 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.858 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.858 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.858 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.858 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.858 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.858 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.858 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.858 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.858 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.858 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.858 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.858 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.858 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.858 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.859 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.859 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.859 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.859 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.859 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.859 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.859 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.859 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.859 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.859 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.859 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.859 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.859 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.859 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.859 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.859 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.859 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.860 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.860 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.860 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.860 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.860 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.860 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.860 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.860 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.860 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.860 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.860 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.860 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.860 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.860 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.860 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.860 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.860 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.860 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.861 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.861 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.862 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.864 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.864 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.888 15 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.899 15 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.900 15 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 22 08:16:21 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:21.900 15 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 22 08:16:22 compute-0 sudo[200344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oauxevcvgqsxaftohbqmgeotnqcxvmgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799381.1331575-578-15221450899950/AnsiballZ_copy.py'
Nov 22 08:16:22 compute-0 sudo[200344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.057 15 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.057 15 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.057 15 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.057 15 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.057 15 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.057 15 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.057 15 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.057 15 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.057 15 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.058 15 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.058 15 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.058 15 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.058 15 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.058 15 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.058 15 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.058 15 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.058 15 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.058 15 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.058 15 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.058 15 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.059 15 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.059 15 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.059 15 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.059 15 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.059 15 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.059 15 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.059 15 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.059 15 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.059 15 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.059 15 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.060 15 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.060 15 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.060 15 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.060 15 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.060 15 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.060 15 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.060 15 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.060 15 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.060 15 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.060 15 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.060 15 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.060 15 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.060 15 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.060 15 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.061 15 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.061 15 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.061 15 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.061 15 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.061 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.061 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.061 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.061 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.061 15 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.061 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.061 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.061 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.062 15 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.062 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.062 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.062 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.062 15 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.062 15 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.062 15 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.062 15 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.062 15 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.062 15 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.062 15 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.062 15 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.063 15 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.063 15 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.063 15 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.063 15 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.063 15 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.063 15 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.063 15 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.063 15 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.063 15 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.063 15 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.063 15 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.063 15 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.064 15 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.064 15 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.064 15 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.064 15 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.064 15 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.064 15 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.064 15 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.064 15 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.064 15 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.064 15 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.064 15 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.065 15 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.065 15 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.065 15 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.065 15 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.065 15 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.065 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.065 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.065 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.065 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.065 15 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.065 15 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.065 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.065 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.066 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.066 15 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.066 15 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.066 15 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.066 15 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.066 15 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.066 15 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.066 15 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.066 15 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.066 15 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.066 15 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.066 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.067 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.067 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.067 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.067 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.067 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.067 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.067 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.067 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.067 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.067 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.067 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.067 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.068 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.068 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.068 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.068 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.068 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.068 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.068 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.068 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.068 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.068 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.068 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.068 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.068 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.068 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.068 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.069 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.069 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.069 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.069 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.069 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.069 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.069 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.069 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.069 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.069 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.069 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.070 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.070 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.070 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.070 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.070 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.070 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.070 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.070 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.070 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.070 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.070 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.070 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.071 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.071 15 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
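The dump that ends with the banner line above is emitted by oslo.config's log_opt_values helper, which the cotyledon glue calls at service startup whenever debug = True (the first option in the dump). A minimal sketch of that mechanism, using a single illustrative option rather than ceilometer's full schema:

    # Minimal sketch of how the option dump above is produced.
    # Assumes oslo.config is installed; the 'debug' option here is illustrative.
    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts([cfg.BoolOpt('debug', default=True)])
    CONF([])  # parse an empty command line

    # Emits one DEBUG line per registered option, bracketed by the
    # '****' banner lines, matching the cfg.py:2817/2824 entries above.
    CONF.log_opt_values(LOG, logging.DEBUG)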
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.071 15 DEBUG cotyledon._service [-] Run service AgentManager(0) [15] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.073 15 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
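The "Config file:" dict logged above is the parsed contents of polling.yaml (the polling.cfg_file option in the dump). A hedged reconstruction of the equivalent YAML, rendered with PyYAML and assuming the file contains exactly the keys shown:

    # Renders the polling.yaml equivalent of the dict in the log line above.
    # Assumes PyYAML is installed; the dict is copied verbatim from the log.
    import yaml

    polling_cfg = {
        'sources': [{
            'name': 'pollsters',
            'interval': 120,
            'meters': ['power.state', 'cpu', 'memory.usage',
                       'disk.*', 'network.*'],
        }],
    }

    print(yaml.safe_dump(polling_cfg, sort_keys=False))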
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.085 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.086 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
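The two manager messages above follow from polling.threads_to_process_pollsters = 1: the pollsters of source [pollsters] share a single-worker thread pool, so tasks queue and run one at a time. A small standard-library illustration of that serialization (poll() is a stand-in, not ceilometer code):

    # Why one worker thread makes a polling cycle take longer:
    # three 0.1 s tasks run back to back instead of in parallel.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(meter):
        time.sleep(0.1)  # pretend to collect one meter
        return meter

    with ThreadPoolExecutor(max_workers=1) as executor:
        start = time.monotonic()
        done = list(executor.map(poll, ['cpu', 'memory.usage', 'power.state']))
        print(done, round(time.monotonic() - start, 2))  # ~0.3 s, not ~0.1 s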
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.086 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.086 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.086 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.087 15 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
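The inspector connects to qemu:///system, the default URI for libvirt_type = kvm when libvirt_uri is left empty (both visible in the option dump above). A minimal sketch of the same connection with the libvirt Python bindings; the read-only flavor is an assumption for illustration:

    # Open the URI logged above and list running guests.
    # Assumes libvirt-python is installed and libvirtd is reachable.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    try:
        # An empty list here is consistent with the 'no resources found
        # this cycle' messages that follow: no instances to poll yet.
        for dom in conn.listAllDomains():
            print(dom.UUIDString(), dom.name())
    finally:
        conn.close()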
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.087 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.087 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.087 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.087 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.087 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.088 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.088 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.088 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.088 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.089 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.089 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.091 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.092 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.092 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.093 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.093 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.094 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.094 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.096 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.097 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.097 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.098 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808a04d0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.099 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.099 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.099 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.099 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.099 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.099 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.099 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.099 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.100 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.100 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.100 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.100 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.100 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.100 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.100 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.100 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.101 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.101 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.101 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.101 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.101 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.101 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.101 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.101 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.101 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.101 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.102 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.102 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.102 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.102 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.102 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.102 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.102 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.102 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.102 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.102 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.103 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.103 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.103 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.104 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.104 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.104 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.104 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.104 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.104 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.104 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.105 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.105 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.105 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.105 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.105 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.105 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.106 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.106 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.106 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.106 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.106 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.106 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.107 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.107 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.107 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.107 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:16:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:16:22.107 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
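Every compute pollster in the cycle above was skipped because the local_instances discovery returned an empty list (discovery cache [{'local_instances': []}]), i.e. no guests are currently running on compute-0. A quick cross-check with the standard openstack CLI, assuming admin credentials are available (the --host filter is admin-only); while these skips appear, the table below should be empty:

  openstack server list --all-projects --host compute-0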
Nov 22 08:16:22 compute-0 python3.9[200346]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763799381.1331575-578-15221450899950/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:16:22 compute-0 sudo[200344]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:22 compute-0 sudo[200501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfssyehfylwlszrjdozffvkqtfpnsodx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799382.5577986-595-117052038959885/AnsiballZ_container_config_data.py'
Nov 22 08:16:22 compute-0 sudo[200501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:23 compute-0 python3.9[200503]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Nov 22 08:16:23 compute-0 sudo[200501]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:23 compute-0 sudo[200653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqtmepefnflupedlgstxiiwbgofapvqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799383.267386-604-199940568890870/AnsiballZ_container_config_hash.py'
Nov 22 08:16:23 compute-0 sudo[200653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:23 compute-0 python3.9[200655]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 08:16:23 compute-0 sudo[200653]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:24 compute-0 sudo[200818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upspqxdbanqejcnjtbdxccrvzirwbvhb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763799384.074548-614-267997607937308/AnsiballZ_edpm_container_manage.py'
Nov 22 08:16:24 compute-0 sudo[200818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:24 compute-0 podman[200779]: 2025-11-22 08:16:24.382308388 +0000 UTC m=+0.086345797 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 08:16:24 compute-0 python3[200826]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 08:16:24 compute-0 podman[200871]: 2025-11-22 08:16:24.900779216 +0000 UTC m=+0.089206897 container create 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:16:24 compute-0 podman[200871]: 2025-11-22 08:16:24.833315368 +0000 UTC m=+0.021743059 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Nov 22 08:16:24 compute-0 python3[200826]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
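The PODMAN-CONTAINER-DEBUG line above is the exact create command issued by edpm_container_manage. The same command, one option per line for readability; the long --label config_data=... value is elided here, and quoting is added where an interactive shell would require it:

  podman create --name node_exporter \
    --conmon-pidfile /run/node_exporter.pid \
    --env OS_ENDPOINT_TYPE=internal \
    --healthcheck-command '/openstack/healthcheck node_exporter' \
    --label config_id=edpm \
    --label container_name=node_exporter \
    --label managed_by=edpm_ansible \
    --label 'config_data={...}' \
    --log-driver journald --log-level info \
    --network host --privileged=True \
    --publish 9100:9100 --user root \
    --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z \
    --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z \
    --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw \
    --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z \
    quay.io/prometheus/node-exporter:v1.5.0 \
    --web.config.file=/etc/node_exporter/node_exporter.yaml \
    --web.disable-exporter-metrics \
    --collector.systemd \
    '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service' \
    --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone \
    --no-collector.time --no-collector.timex --no-collector.uname \
    --no-collector.stat --no-collector.hwmon --no-collector.os \
    --no-collector.selinux --no-collector.textfile \
    --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl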
Nov 22 08:16:25 compute-0 sudo[200818]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:25 compute-0 sudo[201060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkphtljruwqgwetknjimhgzgbupicrqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799385.2072275-622-259406986946713/AnsiballZ_stat.py'
Nov 22 08:16:25 compute-0 sudo[201060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:25 compute-0 python3.9[201062]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:16:25 compute-0 sudo[201060]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:26 compute-0 sudo[201214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llywgwnljhjegsdetqxohtepyhsinhus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799386.708019-631-49040486766876/AnsiballZ_file.py'
Nov 22 08:16:26 compute-0 sudo[201214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:27 compute-0 python3.9[201216]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:16:27 compute-0 sudo[201214]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:27 compute-0 sudo[201365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsjxnipelqtdimrdiswuzhmgwdzqamvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799387.2382689-631-266196180869471/AnsiballZ_copy.py'
Nov 22 08:16:27 compute-0 sudo[201365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:27 compute-0 python3.9[201367]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763799387.2382689-631-266196180869471/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
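The unit file content itself is not logged (content=NOT_LOGGING_PARAMETER). A plausible minimal sketch of what a podman-backed unit of this kind looks like; the exact file is an assumption, inferred only from the --conmon-pidfile seen in the create command and the podman start/stop behaviour visible later in this log:

  cat > /etc/systemd/system/edpm_node_exporter.service <<'EOF'
  # hypothetical reconstruction -- the actual unit content was not logged
  [Unit]
  Description=node_exporter container
  After=network-online.target

  [Service]
  Type=forking
  Restart=always
  PIDFile=/run/node_exporter.pid
  ExecStart=/usr/bin/podman start node_exporter
  ExecStop=/usr/bin/podman stop -t 10 node_exporter

  [Install]
  WantedBy=multi-user.target
  EOF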
Nov 22 08:16:27 compute-0 sudo[201365]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:28 compute-0 sudo[201441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abfgymcxcpmkyfuedbdpyioniripacoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799387.2382689-631-266196180869471/AnsiballZ_systemd.py'
Nov 22 08:16:28 compute-0 sudo[201441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:29 compute-0 python3.9[201443]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:16:29 compute-0 systemd[1]: Reloading.
Nov 22 08:16:29 compute-0 systemd-sysv-generator[201471]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Nov 22 08:16:29 compute-0 systemd-rc-local-generator[201466]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:16:29 compute-0 sudo[201441]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:29 compute-0 sudo[201552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdupirsgnymtlnzegezkviaeiigvpkzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799387.2382689-631-266196180869471/AnsiballZ_systemd.py'
Nov 22 08:16:29 compute-0 sudo[201552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:30 compute-0 python3.9[201554]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:16:30 compute-0 systemd[1]: Reloading.
Nov 22 08:16:30 compute-0 systemd-rc-local-generator[201583]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:16:30 compute-0 systemd-sysv-generator[201587]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Nov 22 08:16:30 compute-0 systemd[1]: Starting node_exporter container...
Nov 22 08:16:30 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f8d0a057484c6f1b9a7dc5b1c0aa379fba2bc3bf4c58990f31f005b37ecc7ee/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f8d0a057484c6f1b9a7dc5b1c0aa379fba2bc3bf4c58990f31f005b37ecc7ee/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:30 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001.
Nov 22 08:16:30 compute-0 podman[201595]: 2025-11-22 08:16:30.874596604 +0000 UTC m=+0.356243809 container init 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.891Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.891Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.891Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as the root user. This exporter is designed to run as an unprivileged user; root is not required."
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.891Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.891Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.891Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.891Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
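The diskstats error above is harmless: it only means udev-sourced device properties are dropped, because /run/udev/data is not visible inside the container. If those properties were wanted, one hypothetical fix (not part of this deployment's configuration) would be an extra bind mount on the create command shown earlier:

  --volume /run/udev/data:/run/udev/data:ro   # assumption: exposes host udev data to the diskstats collector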
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=arp
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=bcache
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=bonding
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=cpu
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=edac
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=filefd
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=netclass
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=netdev
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=netstat
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=nfs
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=nvme
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=softnet
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=systemd
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=xfs
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.892Z caller=node_exporter.go:117 level=info collector=zfs
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.895Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Nov 22 08:16:30 compute-0 node_exporter[201610]: ts=2025-11-22T08:16:30.896Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
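The --web.config.file flag passed at container creation enables the TLS listener reported on the two lines above. The file's contents are not shown in this log; a minimal sketch in the Prometheus exporter-toolkit web-config format, assuming the certificate pair mounted at /etc/node_exporter/tls uses the conventional tls.crt/tls.key names:

  cat > /var/lib/openstack/config/telemetry/node_exporter.yaml <<'EOF'
  # hypothetical minimal web config -- the real file contents are not in this log
  tls_server_config:
    cert_file: /etc/node_exporter/tls/tls.crt
    key_file: /etc/node_exporter/tls/tls.key
  EOF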
Nov 22 08:16:30 compute-0 podman[201595]: 2025-11-22 08:16:30.911235009 +0000 UTC m=+0.392882174 container start 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 22 08:16:30 compute-0 podman[201595]: node_exporter
Nov 22 08:16:31 compute-0 systemd[1]: Started node_exporter container.
Nov 22 08:16:31 compute-0 sudo[201552]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:31 compute-0 podman[201619]: 2025-11-22 08:16:31.063707536 +0000 UTC m=+0.138220649 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:16:31 compute-0 sudo[201792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obapbcgezblwajzhgnlgkktcnzsfqrfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799391.21786-655-247370042808966/AnsiballZ_systemd.py'
Nov 22 08:16:31 compute-0 sudo[201792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:31 compute-0 python3.9[201794]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:16:31 compute-0 systemd[1]: Stopping node_exporter container...
Nov 22 08:16:31 compute-0 systemd[1]: libpod-213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001.scope: Deactivated successfully.
Nov 22 08:16:31 compute-0 podman[201798]: 2025-11-22 08:16:31.897150497 +0000 UTC m=+0.055530554 container died 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 22 08:16:31 compute-0 systemd[1]: 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001-6580bb428147fb49.timer: Deactivated successfully.
Nov 22 08:16:31 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001.
Nov 22 08:16:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001-userdata-shm.mount: Deactivated successfully.
Nov 22 08:16:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f8d0a057484c6f1b9a7dc5b1c0aa379fba2bc3bf4c58990f31f005b37ecc7ee-merged.mount: Deactivated successfully.
Nov 22 08:16:31 compute-0 podman[201798]: 2025-11-22 08:16:31.979155732 +0000 UTC m=+0.137535789 container cleanup 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:16:31 compute-0 podman[201798]: node_exporter
Nov 22 08:16:31 compute-0 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 22 08:16:32 compute-0 podman[201827]: node_exporter
Nov 22 08:16:32 compute-0 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Nov 22 08:16:32 compute-0 systemd[1]: Stopped node_exporter container.
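The exit-code failure above appears to be a side effect of the restart requested at 08:16:31 (ansible-ansible.builtin.systemd with state=restarted), not a crash: systemd stops the container, the conmon process exits non-zero, and the unit is immediately started again below. To verify after the fact, standard systemd tooling suffices:

  systemctl status edpm_node_exporter.service                        # should report active (running) after the restart
  journalctl -u edpm_node_exporter.service --since "2025-11-22 08:16:31"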
Nov 22 08:16:32 compute-0 auditd[707]: Audit daemon rotating log files
Nov 22 08:16:32 compute-0 systemd[1]: Starting node_exporter container...
Nov 22 08:16:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f8d0a057484c6f1b9a7dc5b1c0aa379fba2bc3bf4c58990f31f005b37ecc7ee/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f8d0a057484c6f1b9a7dc5b1c0aa379fba2bc3bf4c58990f31f005b37ecc7ee/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:32 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001.
Nov 22 08:16:32 compute-0 podman[201840]: 2025-11-22 08:16:32.211224026 +0000 UTC m=+0.120017790 container init 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.224Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.224Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.224Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as the root user. This exporter is designed to run as an unprivileged user; root is not required."
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
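This error is expected with the volume list shown above: the container mounts its web config, TLS material, the D-Bus socket and the healthcheck directory, but not /run/udev, so the diskstats collector cannot read udev device properties and falls back to /proc/diskstats alone. A hypothetical remedy (not applied in this deployment) would be one extra bind mount in the container's volumes list:

    # Hypothetical addition to config_data['volumes'] for node_exporter;
    # the deployment in this log runs WITHOUT it, hence the error above.
    extra_volume = '/run/udev/data:/run/udev/data:ro'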
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
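The two patterns above decide what the systemd collector exports; node_exporter appears to apply them fully anchored (wrapping them in ^(?:...)$), so a unit is reported only if it wholly matches the include pattern and escapes the exclude pattern. Replaying them in Python predicts the coverage; the sample unit names below are illustrative, not read from this host:

    import re

    # Patterns copied verbatim from the two "Parsed flag" lines above.
    include = re.compile(r'(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service')
    exclude = re.compile(r'.+\.(automount|device|mount|scope|slice)')

    def collected(unit: str) -> bool:
        # fullmatch mirrors node_exporter's ^(?:pattern)$ anchoring.
        return bool(include.fullmatch(unit)) and not exclude.fullmatch(unit)

    for unit in ('edpm_podman_exporter.service', 'ovsdb-server.service',
                 'sshd.service', 'tmp.mount'):
        print(unit, collected(unit))   # only the first two are collected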
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=arp
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=bcache
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=bonding
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=cpu
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=edac
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=filefd
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=netclass
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=netdev
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=netstat
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=nfs
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=nvme
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=softnet
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=systemd
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=xfs
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.225Z caller=node_exporter.go:117 level=info collector=zfs
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.226Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Nov 22 08:16:32 compute-0 node_exporter[201855]: ts=2025-11-22T08:16:32.227Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
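TLS on :9100 means scrapes must speak HTTPS and trust the CA behind the certificates mounted from /var/lib/openstack/certs/telemetry/default. A minimal probe from the host, assuming the telemetry CA bundle path below (a path seen mounted into other containers later in this log) and a hostname that matches the certificate's SAN:

    import ssl
    import urllib.request

    # Assumed CA bundle location; substitute whatever signed the :9100 cert.
    ctx = ssl.create_default_context(
        cafile='/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem')

    with urllib.request.urlopen('https://compute-0:9100/metrics',
                                context=ctx, timeout=5) as resp:
        for line in resp.read().decode().splitlines()[:5]:
            print(line)   # first few node_exporter metric lines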
Nov 22 08:16:32 compute-0 podman[201840]: 2025-11-22 08:16:32.236902614 +0000 UTC m=+0.145696378 container start 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 22 08:16:32 compute-0 podman[201840]: node_exporter
Nov 22 08:16:32 compute-0 systemd[1]: Started node_exporter container.
Nov 22 08:16:32 compute-0 sudo[201792]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:32 compute-0 podman[201865]: 2025-11-22 08:16:32.303166628 +0000 UTC m=+0.055532254 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 22 08:16:32 compute-0 sudo[202039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhowwexvkneeltpgpqeyraxczckroyrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799392.465033-663-168347941840968/AnsiballZ_stat.py'
Nov 22 08:16:32 compute-0 sudo[202039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:32 compute-0 python3.9[202041]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:16:32 compute-0 sudo[202039]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:33 compute-0 sudo[202162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxxnubbuezzpfaxctijjkecxsgwlwsvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799392.465033-663-168347941840968/AnsiballZ_copy.py'
Nov 22 08:16:33 compute-0 sudo[202162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:33 compute-0 python3.9[202164]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763799392.465033-663-168347941840968/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:16:33 compute-0 sudo[202162]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:34 compute-0 sudo[202314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtkrgvdooizunjxknryceemhhyrrqbic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799393.7895267-680-131502275719528/AnsiballZ_container_config_data.py'
Nov 22 08:16:34 compute-0 sudo[202314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:34 compute-0 python3.9[202316]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Nov 22 08:16:34 compute-0 sudo[202314]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:34 compute-0 sudo[202466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auzbkmguihjouhugbvrbvlhscopywcms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799394.515012-689-265782374251411/AnsiballZ_container_config_hash.py'
Nov 22 08:16:34 compute-0 sudo[202466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:34 compute-0 python3.9[202468]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 08:16:34 compute-0 sudo[202466]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:35 compute-0 sudo[202618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbisbxnaunquewkufjrhugztwxtotldf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763799395.2285357-699-154571212523427/AnsiballZ_edpm_container_manage.py'
Nov 22 08:16:35 compute-0 sudo[202618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:35 compute-0 python3[202620]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 08:16:37 compute-0 podman[202633]: 2025-11-22 08:16:37.211495573 +0000 UTC m=+1.316596651 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Nov 22 08:16:37 compute-0 podman[202729]: 2025-11-22 08:16:37.348363333 +0000 UTC m=+0.048086227 container create 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 22 08:16:37 compute-0 podman[202729]: 2025-11-22 08:16:37.320313809 +0000 UTC m=+0.020036733 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Nov 22 08:16:37 compute-0 python3[202620]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
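The config_data label that edpm_ansible attaches in the create command above is a Python dict literal (note the single quotes and bare True), not JSON, so the clean way to recover it from a running container is ast.literal_eval. A sketch, assuming podman inspect's Docker-compatible .Config.Labels field:

    import ast
    import subprocess

    # Pull the raw label back out of the container metadata.
    raw = subprocess.run(
        ['podman', 'inspect', '--format',
         '{{ index .Config.Labels "config_data" }}', 'podman_exporter'],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    config = ast.literal_eval(raw)   # dict repr, so literal_eval, not json.loads
    print(config['image'], config['ports'], config['healthcheck']['test'])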
Nov 22 08:16:37 compute-0 sudo[202618]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:38 compute-0 sudo[202915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-attgodiwncaaukoktmgnzkxfetmvzwel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799397.6457155-707-254811128518229/AnsiballZ_stat.py'
Nov 22 08:16:38 compute-0 sudo[202915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:38 compute-0 python3.9[202917]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:16:38 compute-0 sudo[202915]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:39 compute-0 sudo[203069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpdlclooyyikcjibanbcjkqrdqvwpdrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799399.1172879-716-190321206734304/AnsiballZ_file.py'
Nov 22 08:16:39 compute-0 sudo[203069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:39 compute-0 python3.9[203071]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:16:39 compute-0 sudo[203069]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:40 compute-0 sudo[203220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpinmiourqprlvyofyzdmerzzcdxtykd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799399.7485752-716-189856156802711/AnsiballZ_copy.py'
Nov 22 08:16:40 compute-0 sudo[203220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:41 compute-0 python3.9[203222]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763799399.7485752-716-189856156802711/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:16:41 compute-0 sudo[203220]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:41 compute-0 sudo[203296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmgseezxyafnaqzsymnizhilqnbxndax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799399.7485752-716-189856156802711/AnsiballZ_systemd.py'
Nov 22 08:16:41 compute-0 sudo[203296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:41 compute-0 python3.9[203298]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:16:41 compute-0 systemd[1]: Reloading.
Nov 22 08:16:41 compute-0 systemd-rc-local-generator[203326]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:16:41 compute-0 systemd-sysv-generator[203330]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:16:42 compute-0 sudo[203296]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:42 compute-0 sudo[203407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbbxlutnzccbjznjypjqfialevfmlahv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799399.7485752-716-189856156802711/AnsiballZ_systemd.py'
Nov 22 08:16:42 compute-0 sudo[203407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:42 compute-0 python3.9[203409]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:16:42 compute-0 systemd[1]: Reloading.
Nov 22 08:16:42 compute-0 systemd-rc-local-generator[203435]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:16:42 compute-0 systemd-sysv-generator[203441]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
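Taken together, the two ansible-systemd calls above (daemon_reload=True after the unit file copy, then state=restarted with enabled=True) are the standard unit rollout; by hand it would be roughly the following, sketched here only to make the sequence explicit:

    import subprocess

    # Equivalent of the ansible-systemd invocations above, run manually.
    for cmd in (['systemctl', 'daemon-reload'],
                ['systemctl', 'enable', 'edpm_podman_exporter.service'],
                ['systemctl', 'restart', 'edpm_podman_exporter.service']):
        subprocess.run(cmd, check=True)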
Nov 22 08:16:43 compute-0 systemd[1]: Starting podman_exporter container...
Nov 22 08:16:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:16:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe53e8591a835d38b0228dae9299ef14149a77f09e2f732f425b8e23a945e50/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe53e8591a835d38b0228dae9299ef14149a77f09e2f732f425b8e23a945e50/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:43 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30.
Nov 22 08:16:43 compute-0 podman[203449]: 2025-11-22 08:16:43.21055332 +0000 UTC m=+0.148582960 container init 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 22 08:16:43 compute-0 podman_exporter[203465]: ts=2025-11-22T08:16:43.232Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Nov 22 08:16:43 compute-0 podman_exporter[203465]: ts=2025-11-22T08:16:43.232Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Nov 22 08:16:43 compute-0 podman_exporter[203465]: ts=2025-11-22T08:16:43.232Z caller=handler.go:94 level=info msg="enabled collectors"
Nov 22 08:16:43 compute-0 podman_exporter[203465]: ts=2025-11-22T08:16:43.232Z caller=handler.go:105 level=info collector=container
Nov 22 08:16:43 compute-0 podman[203449]: 2025-11-22 08:16:43.243522162 +0000 UTC m=+0.181551792 container start 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 22 08:16:43 compute-0 podman[203449]: podman_exporter
Nov 22 08:16:43 compute-0 systemd[1]: Starting Podman API Service...
Nov 22 08:16:43 compute-0 systemd[1]: Started Podman API Service.
Nov 22 08:16:43 compute-0 systemd[1]: Started podman_exporter container.
Nov 22 08:16:43 compute-0 podman[203476]: time="2025-11-22T08:16:43Z" level=info msg="/usr/bin/podman filtering at log level info"
Nov 22 08:16:43 compute-0 podman[203476]: time="2025-11-22T08:16:43Z" level=info msg="Setting parallel job count to 25"
Nov 22 08:16:43 compute-0 podman[203476]: time="2025-11-22T08:16:43Z" level=info msg="Using sqlite as database backend"
Nov 22 08:16:43 compute-0 podman[203476]: time="2025-11-22T08:16:43Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Nov 22 08:16:43 compute-0 podman[203476]: time="2025-11-22T08:16:43Z" level=info msg="Using systemd socket activation to determine API endpoint"
Nov 22 08:16:43 compute-0 podman[203476]: time="2025-11-22T08:16:43Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Nov 22 08:16:43 compute-0 sudo[203407]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:43 compute-0 podman[203476]: @ - - [22/Nov/2025:08:16:43 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Nov 22 08:16:43 compute-0 podman[203476]: time="2025-11-22T08:16:43Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:16:43 compute-0 podman[203476]: @ - - [22/Nov/2025:08:16:43 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19585 "" "Go-http-client/1.1"
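These two requests are podman_exporter reaching the host's Podman API over the bind-mounted /run/podman/podman.sock (its CONTAINER_HOST above). The _ping response logged as "200 2" is the literal two-byte body "OK", which can be reproduced with nothing but a Unix socket (run as root, since the socket is root-owned):

    import socket

    # Speak plain HTTP/1.1 over the Podman API socket, as the exporter does.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect('/run/podman/podman.sock')
    s.sendall(b'GET /v4.9.3/libpod/_ping HTTP/1.1\r\n'
              b'Host: localhost\r\nConnection: close\r\n\r\n')
    reply = b''
    while chunk := s.recv(4096):
        reply += chunk
    s.close()
    print(reply.decode(errors='replace'))   # expect "HTTP/1.1 200 OK", body "OK"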
Nov 22 08:16:43 compute-0 podman_exporter[203465]: ts=2025-11-22T08:16:43.326Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Nov 22 08:16:43 compute-0 podman_exporter[203465]: ts=2025-11-22T08:16:43.327Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Nov 22 08:16:43 compute-0 podman_exporter[203465]: ts=2025-11-22T08:16:43.327Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Nov 22 08:16:43 compute-0 podman[203474]: 2025-11-22 08:16:43.330225178 +0000 UTC m=+0.074070164 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:16:43 compute-0 systemd[1]: 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30-6a943b07cdcd381a.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 08:16:43 compute-0 systemd[1]: 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30-6a943b07cdcd381a.service: Failed with result 'exit-code'.
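The failing <container-id>-<hash>.service here is the transient, timer-driven unit wrapping /usr/bin/podman healthcheck run (started a few lines up). Its exit status 1 simply reflects the health_status=starting / failing_streak=1 reported just above: the exporter had only just started, and the rerun at 08:16:45 below comes back healthy. Since podman healthcheck run signals health through its exit code, a manual re-check is a one-liner:

    import subprocess

    # Exit code 0 means the container's healthcheck command passed.
    rc = subprocess.run(['podman', 'healthcheck', 'run',
                         'podman_exporter']).returncode
    print('healthy' if rc == 0 else f'unhealthy (rc={rc})')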
Nov 22 08:16:43 compute-0 sudo[203661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yipxwwdpiwuastpjrdmwpqneqrmpcoxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799403.476647-740-127625569702247/AnsiballZ_systemd.py'
Nov 22 08:16:43 compute-0 sudo[203661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:44 compute-0 python3.9[203663]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:16:44 compute-0 systemd[1]: Stopping podman_exporter container...
Nov 22 08:16:44 compute-0 podman[203476]: @ - - [22/Nov/2025:08:16:43 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Nov 22 08:16:44 compute-0 systemd[1]: libpod-2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30.scope: Deactivated successfully.
Nov 22 08:16:44 compute-0 podman[203667]: 2025-11-22 08:16:44.214021608 +0000 UTC m=+0.073959361 container died 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:16:44 compute-0 systemd[1]: 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30-6a943b07cdcd381a.timer: Deactivated successfully.
Nov 22 08:16:44 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30.
Nov 22 08:16:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30-userdata-shm.mount: Deactivated successfully.
Nov 22 08:16:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbe53e8591a835d38b0228dae9299ef14149a77f09e2f732f425b8e23a945e50-merged.mount: Deactivated successfully.
Nov 22 08:16:44 compute-0 podman[203667]: 2025-11-22 08:16:44.626374477 +0000 UTC m=+0.486312250 container cleanup 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 22 08:16:44 compute-0 podman[203667]: podman_exporter
Nov 22 08:16:44 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 22 08:16:44 compute-0 podman[203697]: podman_exporter
Nov 22 08:16:44 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Nov 22 08:16:44 compute-0 systemd[1]: Stopped podman_exporter container.
Nov 22 08:16:44 compute-0 systemd[1]: Starting podman_exporter container...
Nov 22 08:16:44 compute-0 podman[203699]: 2025-11-22 08:16:44.722318091 +0000 UTC m=+0.057037246 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 08:16:44 compute-0 podman[203698]: 2025-11-22 08:16:44.729901434 +0000 UTC m=+0.068024415 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 22 08:16:44 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:16:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe53e8591a835d38b0228dae9299ef14149a77f09e2f732f425b8e23a945e50/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe53e8591a835d38b0228dae9299ef14149a77f09e2f732f425b8e23a945e50/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:44 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30.
Nov 22 08:16:45 compute-0 podman[203728]: 2025-11-22 08:16:45.05031759 +0000 UTC m=+0.340681815 container init 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:16:45 compute-0 podman_exporter[203761]: ts=2025-11-22T08:16:45.073Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Nov 22 08:16:45 compute-0 podman_exporter[203761]: ts=2025-11-22T08:16:45.073Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Nov 22 08:16:45 compute-0 podman_exporter[203761]: ts=2025-11-22T08:16:45.073Z caller=handler.go:94 level=info msg="enabled collectors"
Nov 22 08:16:45 compute-0 podman_exporter[203761]: ts=2025-11-22T08:16:45.073Z caller=handler.go:105 level=info collector=container
Nov 22 08:16:45 compute-0 podman[203476]: @ - - [22/Nov/2025:08:16:45 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Nov 22 08:16:45 compute-0 podman[203476]: time="2025-11-22T08:16:45Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:16:45 compute-0 podman[203728]: 2025-11-22 08:16:45.081717548 +0000 UTC m=+0.372081683 container start 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:16:45 compute-0 podman[203728]: podman_exporter
Nov 22 08:16:45 compute-0 systemd[1]: Started podman_exporter container.
Nov 22 08:16:45 compute-0 podman[203476]: @ - - [22/Nov/2025:08:16:45 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19587 "" "Go-http-client/1.1"
Nov 22 08:16:45 compute-0 podman_exporter[203761]: ts=2025-11-22T08:16:45.177Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Nov 22 08:16:45 compute-0 podman_exporter[203761]: ts=2025-11-22T08:16:45.178Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Nov 22 08:16:45 compute-0 podman_exporter[203761]: ts=2025-11-22T08:16:45.178Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Nov 22 08:16:45 compute-0 sudo[203661]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:45 compute-0 podman[203770]: 2025-11-22 08:16:45.241635233 +0000 UTC m=+0.147217331 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:16:45 compute-0 sudo[203944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfveemrtudpldjdroqtabcptpmcqhpmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799405.389375-748-30759607359253/AnsiballZ_stat.py'
Nov 22 08:16:45 compute-0 sudo[203944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:45 compute-0 python3.9[203946]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:16:45 compute-0 sudo[203944]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:46 compute-0 sudo[204067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxnnkkwglchkjzcnfgnwckloejefohiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799405.389375-748-30759607359253/AnsiballZ_copy.py'
Nov 22 08:16:46 compute-0 sudo[204067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:46 compute-0 python3.9[204069]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763799405.389375-748-30759607359253/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:16:46 compute-0 sudo[204067]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:47 compute-0 sudo[204219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpgjnbysprrqdfaeocmannzbzifiyhkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799406.7646627-765-218201614927625/AnsiballZ_container_config_data.py'
Nov 22 08:16:47 compute-0 sudo[204219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:47 compute-0 python3.9[204221]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Nov 22 08:16:47 compute-0 sudo[204219]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:47 compute-0 sudo[204371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkidsositjachajxwbehbpwwrtwdpvnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799407.5930905-774-182132822340487/AnsiballZ_container_config_hash.py'
Nov 22 08:16:47 compute-0 sudo[204371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:48 compute-0 python3.9[204373]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 08:16:48 compute-0 sudo[204371]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:48 compute-0 sudo[204523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grinvolcixjhkgozuwukzvmptgvvjtyk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763799408.3492608-784-275393876094916/AnsiballZ_edpm_container_manage.py'
Nov 22 08:16:48 compute-0 sudo[204523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:48 compute-0 python3[204525]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 08:16:51 compute-0 podman[204581]: 2025-11-22 08:16:51.620457232 +0000 UTC m=+0.570447530 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Nov 22 08:16:51 compute-0 systemd[1]: c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d-fad5c9c3264056a.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 08:16:51 compute-0 systemd[1]: c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d-fad5c9c3264056a.service: Failed with result 'exit-code'.
Nov 22 08:16:51 compute-0 podman[204537]: 2025-11-22 08:16:51.948212045 +0000 UTC m=+2.917811184 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Nov 22 08:16:52 compute-0 podman[204655]: 2025-11-22 08:16:52.14172707 +0000 UTC m=+0.080166179 container create 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, version=9.6, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 22 08:16:52 compute-0 podman[204655]: 2025-11-22 08:16:52.088936417 +0000 UTC m=+0.027375546 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Nov 22 08:16:52 compute-0 python3[204525]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
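This PODMAN-CONTAINER-DEBUG line again exposes the mapping edpm_container_manage applies from config_data to podman create flags: environment -> --env, healthcheck.test -> --healthcheck-command, net -> --network, ports -> --publish, volumes -> --volume, and so on, with the image and then the container command last. A compressed illustration of that visible mapping (labels omitted; this is not the module's actual code):

    def podman_create_argv(name: str, cfg: dict) -> list[str]:
        # Mirrors the flag order seen in the PODMAN-CONTAINER-DEBUG lines.
        argv = ['podman', 'create', '--name', name,
                '--conmon-pidfile', f'/run/{name}.pid']
        for key, val in cfg.get('environment', {}).items():
            argv += ['--env', f'{key}={val}']
        if 'healthcheck' in cfg:
            argv += ['--healthcheck-command', cfg['healthcheck']['test']]
        argv += ['--log-driver', 'journald', '--log-level', 'info']
        if cfg.get('net'):
            argv += ['--network', cfg['net']]
        if cfg.get('privileged'):
            argv += ['--privileged=True']
        for port in cfg.get('ports', []):
            argv += ['--publish', port]
        if cfg.get('user'):
            argv += ['--user', cfg['user']]
        for vol in cfg.get('volumes', []):
            argv += ['--volume', vol]
        argv.append(cfg['image'])
        cmd = cfg.get('command', [])
        return argv + (cmd if isinstance(cmd, list) else [cmd])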
Nov 22 08:16:52 compute-0 sudo[204523]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:53 compute-0 sudo[204842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzhxfxxucexaceiepuhchztjiwpxstws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799413.205527-792-77649419049679/AnsiballZ_stat.py'
Nov 22 08:16:53 compute-0 sudo[204842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:53 compute-0 python3.9[204844]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:16:53 compute-0 sudo[204842]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:54 compute-0 sudo[204996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brbhuenajqlqrumplkwfxzmrlgmxfwry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799414.1253037-801-63446861597736/AnsiballZ_file.py'
Nov 22 08:16:54 compute-0 sudo[204996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:54 compute-0 podman[204998]: 2025-11-22 08:16:54.548234127 +0000 UTC m=+0.102259554 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 08:16:54 compute-0 python3.9[204999]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:16:54 compute-0 sudo[204996]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:55 compute-0 sudo[205173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veqheachxoksnmsbtfthhzgsojkgkpif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799414.8311157-801-171321841678481/AnsiballZ_copy.py'
Nov 22 08:16:55 compute-0 sudo[205173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:55 compute-0 python3.9[205175]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763799414.8311157-801-171321841678481/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:16:55 compute-0 sudo[205173]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:55 compute-0 sudo[205249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vppmmjkylyagfnkzovgprkcjussnprzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799414.8311157-801-171321841678481/AnsiballZ_systemd.py'
Nov 22 08:16:55 compute-0 sudo[205249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:56 compute-0 python3.9[205251]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:16:56 compute-0 systemd[1]: Reloading.
Nov 22 08:16:56 compute-0 systemd-rc-local-generator[205277]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:16:56 compute-0 systemd-sysv-generator[205280]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:16:56 compute-0 sudo[205249]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:56 compute-0 sudo[205360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfgtvcnninusokcjkhwklpdnmvemttnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799414.8311157-801-171321841678481/AnsiballZ_systemd.py'
Nov 22 08:16:56 compute-0 sudo[205360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:57 compute-0 python3.9[205362]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:16:57 compute-0 systemd[1]: Reloading.
Nov 22 08:16:57 compute-0 systemd-rc-local-generator[205393]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:16:57 compute-0 systemd-sysv-generator[205397]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:16:57 compute-0 systemd[1]: Starting openstack_network_exporter container...
Nov 22 08:16:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae9eba506938e598c185037f1e1f4b114d8a8615e6e03bf4692bb2628fda624c/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae9eba506938e598c185037f1e1f4b114d8a8615e6e03bf4692bb2628fda624c/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae9eba506938e598c185037f1e1f4b114d8a8615e6e03bf4692bb2628fda624c/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:57 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4.
Nov 22 08:16:57 compute-0 podman[205403]: 2025-11-22 08:16:57.647547715 +0000 UTC m=+0.141632789 container init 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, name=ubi9-minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=9.6, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9)
Nov 22 08:16:57 compute-0 openstack_network_exporter[205418]: INFO    08:16:57 main.go:48: registering *bridge.Collector
Nov 22 08:16:57 compute-0 openstack_network_exporter[205418]: INFO    08:16:57 main.go:48: registering *coverage.Collector
Nov 22 08:16:57 compute-0 openstack_network_exporter[205418]: INFO    08:16:57 main.go:48: registering *datapath.Collector
Nov 22 08:16:57 compute-0 openstack_network_exporter[205418]: INFO    08:16:57 main.go:48: registering *iface.Collector
Nov 22 08:16:57 compute-0 openstack_network_exporter[205418]: INFO    08:16:57 main.go:48: registering *memory.Collector
Nov 22 08:16:57 compute-0 openstack_network_exporter[205418]: INFO    08:16:57 main.go:48: registering *ovnnorthd.Collector
Nov 22 08:16:57 compute-0 openstack_network_exporter[205418]: INFO    08:16:57 main.go:48: registering *ovn.Collector
Nov 22 08:16:57 compute-0 openstack_network_exporter[205418]: INFO    08:16:57 main.go:48: registering *ovsdbserver.Collector
Nov 22 08:16:57 compute-0 openstack_network_exporter[205418]: INFO    08:16:57 main.go:48: registering *pmd_perf.Collector
Nov 22 08:16:57 compute-0 openstack_network_exporter[205418]: INFO    08:16:57 main.go:48: registering *pmd_rxq.Collector
Nov 22 08:16:57 compute-0 openstack_network_exporter[205418]: INFO    08:16:57 main.go:48: registering *vswitch.Collector
Nov 22 08:16:57 compute-0 openstack_network_exporter[205418]: NOTICE  08:16:57 main.go:76: listening on https://:9105/metrics
Nov 22 08:16:57 compute-0 podman[205403]: 2025-11-22 08:16:57.675208387 +0000 UTC m=+0.169293451 container start 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, name=ubi9-minimal, release=1755695350, container_name=openstack_network_exporter, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 22 08:16:57 compute-0 podman[205403]: openstack_network_exporter
Nov 22 08:16:57 compute-0 systemd[1]: Started openstack_network_exporter container.
Nov 22 08:16:57 compute-0 sudo[205360]: pam_unix(sudo:session): session closed for user root
Nov 22 08:16:57 compute-0 podman[205428]: 2025-11-22 08:16:57.767978353 +0000 UTC m=+0.079733557 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1755695350, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
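At this point the exporter is up: the collectors registered above and the process reports listening on https://:9105/metrics, with TLS material mounted from /var/lib/openstack/certs/telemetry/default. A quick local probe, sketched in Python; certificate verification is disabled here because the CA trust setup on the probing host is an assumption:

    # Sketch: probe the exporter's TLS metrics endpoint from the compute host.
    import ssl, urllib.request

    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # local probe only; the cert is issued for
    ctx.verify_mode = ssl.CERT_NONE  # the internal service name, not localhost
    with urllib.request.urlopen("https://localhost:9105/metrics", context=ctx) as r:
        for line in r.read().decode().splitlines()[:5]:
            print(line)              # first few Prometheus exposition lines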
Nov 22 08:16:58 compute-0 sudo[205599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fijkgzxxlxyhtcdagawuwrxewlaovzhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799417.907774-825-120849736967193/AnsiballZ_systemd.py'
Nov 22 08:16:58 compute-0 sudo[205599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:16:58 compute-0 python3.9[205601]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:16:58 compute-0 systemd[1]: Stopping openstack_network_exporter container...
Nov 22 08:16:58 compute-0 systemd[1]: libpod-0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4.scope: Deactivated successfully.
Nov 22 08:16:58 compute-0 podman[205605]: 2025-11-22 08:16:58.585837642 +0000 UTC m=+0.053414612 container died 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
Nov 22 08:16:58 compute-0 systemd[1]: 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4-7e3d1dabd280f275.timer: Deactivated successfully.
Nov 22 08:16:58 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4.
Nov 22 08:16:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4-userdata-shm.mount: Deactivated successfully.
Nov 22 08:16:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae9eba506938e598c185037f1e1f4b114d8a8615e6e03bf4692bb2628fda624c-merged.mount: Deactivated successfully.
Nov 22 08:16:59 compute-0 podman[205605]: 2025-11-22 08:16:59.528546523 +0000 UTC m=+0.996123503 container cleanup 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, config_id=edpm, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, architecture=x86_64, maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Nov 22 08:16:59 compute-0 podman[205605]: openstack_network_exporter
Nov 22 08:16:59 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 22 08:16:59 compute-0 podman[205632]: openstack_network_exporter
Nov 22 08:16:59 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Nov 22 08:16:59 compute-0 systemd[1]: Stopped openstack_network_exporter container.
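The status=2/INVALIDARGUMENT above is simply whatever exit code the unit's main podman process returned; here it appears while ansible restarts the service, i.e. while the running container is being stopped, and the unit is started again immediately below. The container's recorded state can be read back with podman inspect, sketched here (field names can vary slightly across podman releases):

    # Sketch: read back the container's recorded status and exit code.
    import json, subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "json", "openstack_network_exporter"],
        capture_output=True, text=True, check=True).stdout
    state = json.loads(out)[0]["State"]
    print(state["Status"], state["ExitCode"])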
Nov 22 08:16:59 compute-0 systemd[1]: Starting openstack_network_exporter container...
Nov 22 08:16:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:16:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae9eba506938e598c185037f1e1f4b114d8a8615e6e03bf4692bb2628fda624c/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae9eba506938e598c185037f1e1f4b114d8a8615e6e03bf4692bb2628fda624c/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae9eba506938e598c185037f1e1f4b114d8a8615e6e03bf4692bb2628fda624c/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 08:16:59 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4.
Nov 22 08:16:59 compute-0 podman[205645]: 2025-11-22 08:16:59.863369547 +0000 UTC m=+0.224061941 container init 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9-minimal)
Nov 22 08:16:59 compute-0 openstack_network_exporter[205661]: INFO    08:16:59 main.go:48: registering *bridge.Collector
Nov 22 08:16:59 compute-0 openstack_network_exporter[205661]: INFO    08:16:59 main.go:48: registering *coverage.Collector
Nov 22 08:16:59 compute-0 openstack_network_exporter[205661]: INFO    08:16:59 main.go:48: registering *datapath.Collector
Nov 22 08:16:59 compute-0 openstack_network_exporter[205661]: INFO    08:16:59 main.go:48: registering *iface.Collector
Nov 22 08:16:59 compute-0 openstack_network_exporter[205661]: INFO    08:16:59 main.go:48: registering *memory.Collector
Nov 22 08:16:59 compute-0 openstack_network_exporter[205661]: INFO    08:16:59 main.go:48: registering *ovnnorthd.Collector
Nov 22 08:16:59 compute-0 openstack_network_exporter[205661]: INFO    08:16:59 main.go:48: registering *ovn.Collector
Nov 22 08:16:59 compute-0 openstack_network_exporter[205661]: INFO    08:16:59 main.go:48: registering *ovsdbserver.Collector
Nov 22 08:16:59 compute-0 openstack_network_exporter[205661]: INFO    08:16:59 main.go:48: registering *pmd_perf.Collector
Nov 22 08:16:59 compute-0 openstack_network_exporter[205661]: INFO    08:16:59 main.go:48: registering *pmd_rxq.Collector
Nov 22 08:16:59 compute-0 openstack_network_exporter[205661]: INFO    08:16:59 main.go:48: registering *vswitch.Collector
Nov 22 08:16:59 compute-0 openstack_network_exporter[205661]: NOTICE  08:16:59 main.go:76: listening on https://:9105/metrics
Nov 22 08:16:59 compute-0 podman[205645]: 2025-11-22 08:16:59.890658809 +0000 UTC m=+0.251351183 container start 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1755695350, config_id=edpm, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc.)
Nov 22 08:16:59 compute-0 podman[205645]: openstack_network_exporter
Nov 22 08:16:59 compute-0 systemd[1]: Started openstack_network_exporter container.
Nov 22 08:16:59 compute-0 podman[205671]: 2025-11-22 08:16:59.972057312 +0000 UTC m=+0.069702744 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 22 08:16:59 compute-0 sudo[205599]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:00 compute-0 sudo[205842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfdmdctiklbfflmolpddiqsuwzjdgrgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799420.1470788-833-180005405219072/AnsiballZ_find.py'
Nov 22 08:17:00 compute-0 sudo[205842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:00 compute-0 python3.9[205844]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 08:17:00 compute-0 sudo[205842]: pam_unix(sudo:session): session closed for user root
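The find task above enumerates only the immediate subdirectories of /var/lib/openstack/healthchecks/ (file_type=directory, recurse=False). The same enumeration in plain Python, as a sketch:

    # Sketch: equivalent of the ansible find invocation logged above.
    import os

    base = "/var/lib/openstack/healthchecks/"
    dirs = [entry.path for entry in os.scandir(base) if entry.is_dir()]
    print(dirs)  # e.g. .../openstack_network_exporter, .../ovn_controller, ...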
Nov 22 08:17:01 compute-0 sudo[205994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txmbolmgipaciodyjgccrukubeucahzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799421.0812936-843-156584949230643/AnsiballZ_podman_container_info.py'
Nov 22 08:17:01 compute-0 sudo[205994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:01 compute-0 python3.9[205996]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Nov 22 08:17:01 compute-0 sudo[205994]: pam_unix(sudo:session): session closed for user root
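podman_container_info is essentially a wrapper around podman container inspect, so the facts it gathers can be reproduced directly. A sketch in Python; the Health field is expected because ovn_controller defines a healthcheck, though its exact location in the inspect output can differ between podman versions:

    # Sketch: gather the same container facts the ansible module collects.
    import json, subprocess

    info = json.loads(subprocess.run(
        ["podman", "container", "inspect", "ovn_controller"],
        capture_output=True, text=True, check=True).stdout)[0]
    print(info["State"]["Status"], info["State"]["Health"]["Status"])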
Nov 22 08:17:02 compute-0 sudo[206172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ismbyvzikjryxfyemdmzxxjypgtegxns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799421.9886777-851-7395934629102/AnsiballZ_podman_container_exec.py'
Nov 22 08:17:02 compute-0 sudo[206172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:02 compute-0 podman[206133]: 2025-11-22 08:17:02.999666481 +0000 UTC m=+0.063700873 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:17:03 compute-0 python3.9[206182]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:17:03 compute-0 systemd[1]: Started libpod-conmon-3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d.scope.
Nov 22 08:17:03 compute-0 podman[206186]: 2025-11-22 08:17:03.333283319 +0000 UTC m=+0.113851571 container exec 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
Nov 22 08:17:03 compute-0 podman[206206]: 2025-11-22 08:17:03.409676231 +0000 UTC m=+0.058751243 container exec_died 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 08:17:03 compute-0 podman[206186]: 2025-11-22 08:17:03.432766785 +0000 UTC m=+0.213335077 container exec_died 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Nov 22 08:17:03 compute-0 systemd[1]: libpod-conmon-3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d.scope: Deactivated successfully.
Nov 22 08:17:03 compute-0 sudo[206172]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:04 compute-0 sudo[206368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gatexcyjvhnmdspbanagnrgpuvuaknpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799423.7133396-859-278068408185659/AnsiballZ_podman_container_exec.py'
Nov 22 08:17:04 compute-0 sudo[206368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:04 compute-0 python3.9[206370]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:17:04 compute-0 systemd[1]: Started libpod-conmon-3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d.scope.
Nov 22 08:17:04 compute-0 podman[206371]: 2025-11-22 08:17:04.33671897 +0000 UTC m=+0.079102949 container exec 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 22 08:17:04 compute-0 podman[206371]: 2025-11-22 08:17:04.366315757 +0000 UTC m=+0.108699756 container exec_died 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:17:04 compute-0 systemd[1]: libpod-conmon-3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d.scope: Deactivated successfully.
Nov 22 08:17:04 compute-0 sudo[206368]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:05 compute-0 sudo[206552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oczxojfnadnlcrjiyxluljrroxcpwvoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799424.9038618-867-173848232858505/AnsiballZ_file.py'
Nov 22 08:17:05 compute-0 sudo[206552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:05 compute-0 python3.9[206554]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:05 compute-0 sudo[206552]: pam_unix(sudo:session): session closed for user root
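The exec and file tasks above form one recurring pattern: ask the container for its runtime uid and gid (id -u, id -g), then chown and chmod the healthcheck mount on the host to match, presumably so the healthcheck script mounted read-only at /openstack stays readable in-container. ovn_controller runs as root, hence owner=0 group=0 mode=0700 in the file task. The same steps outside ansible, as a sketch (container_id is a hypothetical helper):

    # Sketch: replicate the uid/gid discovery + ownership fix-up pattern.
    import os, subprocess

    def container_id(name, flag):
        return int(subprocess.run(
            ["podman", "exec", name, "id", flag],
            capture_output=True, text=True, check=True).stdout)

    uid = container_id("ovn_controller", "-u")   # id -u  -> 0
    gid = container_id("ovn_controller", "-g")   # id -g  -> 0
    path = "/var/lib/openstack/healthchecks/ovn_controller"
    os.chown(path, uid, gid)
    os.chmod(path, 0o700)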
Nov 22 08:17:05 compute-0 sudo[206704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkdzazgblstokazwwgjumwngvvyvxedn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799425.7604196-876-88202131978547/AnsiballZ_podman_container_info.py'
Nov 22 08:17:05 compute-0 sudo[206704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:06 compute-0 python3.9[206706]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Nov 22 08:17:06 compute-0 sudo[206704]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:06 compute-0 sudo[206869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsifakdikdtkalxmihnirfbgjtwbrfcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799426.4119613-884-43003690164362/AnsiballZ_podman_container_exec.py'
Nov 22 08:17:06 compute-0 sudo[206869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:06 compute-0 python3.9[206871]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:17:06 compute-0 systemd[1]: Started libpod-conmon-b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4.scope.
Nov 22 08:17:06 compute-0 podman[206872]: 2025-11-22 08:17:06.954394361 +0000 UTC m=+0.082547887 container exec b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 08:17:06 compute-0 podman[206872]: 2025-11-22 08:17:06.988388192 +0000 UTC m=+0.116541688 container exec_died b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 08:17:07 compute-0 systemd[1]: libpod-conmon-b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4.scope: Deactivated successfully.
Nov 22 08:17:07 compute-0 sudo[206869]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:07 compute-0 sudo[207053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwaqsftmefkbfihnjmqumtfgnhzvsspj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799427.1739628-892-101167320807548/AnsiballZ_podman_container_exec.py'
Nov 22 08:17:07 compute-0 sudo[207053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:07 compute-0 python3.9[207055]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:17:07 compute-0 systemd[1]: Started libpod-conmon-b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4.scope.
Nov 22 08:17:07 compute-0 podman[207056]: 2025-11-22 08:17:07.728687427 +0000 UTC m=+0.062371406 container exec b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 08:17:07 compute-0 podman[207056]: 2025-11-22 08:17:07.761982298 +0000 UTC m=+0.095666277 container exec_died b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 08:17:07 compute-0 systemd[1]: libpod-conmon-b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4.scope: Deactivated successfully.
Nov 22 08:17:07 compute-0 sudo[207053]: pam_unix(sudo:session): session closed for user root
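The exec/exec_died pair above is the `id -g` counterpart of the `id -u` probe a moment earlier: the role reads the uid/gid of the account inside the container before fixing up the healthcheck directory on the host. A sketch of what the podman_container_exec task boils down to (assumes podman on PATH and a running container):

    import subprocess

    def id_in_container(name: str, flag: str) -> int:
        """Run `podman exec <name> id <flag>` and return the numeric result."""
        out = subprocess.run(["podman", "exec", name, "id", flag],
                             check=True, capture_output=True, text=True)
        return int(out.stdout.strip())

    uid = id_in_container("ovn_metadata_agent", "-u")
    gid = id_in_container("ovn_metadata_agent", "-g")
    print(uid, gid)

The recovered values feed the ansible.builtin.file task that follows (owner=0/group=0 for this root-running agent; the same probe later yields 42405 for ceilometer_agent_compute).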
Nov 22 08:17:08 compute-0 sudo[207237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzdekwiqegqmkctenscfvxvpiptjmlat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799428.0875125-900-213655796543910/AnsiballZ_file.py'
Nov 22 08:17:08 compute-0 sudo[207237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:08 compute-0 python3.9[207239]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:08 compute-0 sudo[207237]: pam_unix(sudo:session): session closed for user root
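The ansible.builtin.file invocation above enforces ownership and permissions recursively over the healthcheck mount. A rough host-side equivalent of that task (state=directory, recurse=True; assumes it runs as root and ignores the file module's selinux/timestamp parameters):

    import os

    def enforce_tree(path: str, uid: int, gid: int, mode: int) -> None:
        os.makedirs(path, exist_ok=True)              # state=directory
        for dirpath, _dirs, filenames in os.walk(path):
            for entry in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                os.chown(entry, uid, gid)             # owner=0 group=0
                os.chmod(entry, mode)                 # mode=0700

    enforce_tree("/var/lib/openstack/healthchecks/ovn_metadata_agent", 0, 0, 0o700)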
Nov 22 08:17:09 compute-0 sudo[207389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tarhbvxvgiknswiqcujphonomzetzcmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799428.766118-909-64568429682169/AnsiballZ_podman_container_info.py'
Nov 22 08:17:09 compute-0 sudo[207389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:09 compute-0 python3.9[207391]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Nov 22 08:17:09 compute-0 sudo[207389]: pam_unix(sudo:session): session closed for user root
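podman_container_info, invoked above for multipathd, is essentially a wrapper around `podman container inspect`. A sketch of the same query done directly (assumes the container exists; inspect returns a JSON array with one object per name):

    import json
    import subprocess

    out = subprocess.run(["podman", "container", "inspect", "multipathd"],
                         check=True, capture_output=True, text=True)
    info = json.loads(out.stdout)[0]
    print(info["State"]["Status"], info["Image"])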
Nov 22 08:17:09 compute-0 sudo[207554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojleuufocehusttqwhqqbmxrogrfsbwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799429.5340924-917-176399494913886/AnsiballZ_podman_container_exec.py'
Nov 22 08:17:09 compute-0 sudo[207554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:17:09.946 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:17:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:17:09.947 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:17:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:17:09.947 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:17:10 compute-0 python3.9[207556]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:17:10 compute-0 systemd[1]: Started libpod-conmon-02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878.scope.
Nov 22 08:17:10 compute-0 podman[207557]: 2025-11-22 08:17:10.103668742 +0000 UTC m=+0.082042413 container exec 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:17:10 compute-0 podman[207577]: 2025-11-22 08:17:10.170679428 +0000 UTC m=+0.052686782 container exec_died 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 08:17:10 compute-0 podman[207557]: 2025-11-22 08:17:10.176142273 +0000 UTC m=+0.154515944 container exec_died 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 08:17:10 compute-0 systemd[1]: libpod-conmon-02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878.scope: Deactivated successfully.
Nov 22 08:17:10 compute-0 sudo[207554]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:10 compute-0 sudo[207739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfdemfbafeufsbpsrqjbbvfmezgjzzri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799430.3706477-925-148483207071845/AnsiballZ_podman_container_exec.py'
Nov 22 08:17:10 compute-0 sudo[207739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:10 compute-0 python3.9[207741]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:17:10 compute-0 systemd[1]: Started libpod-conmon-02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878.scope.
Nov 22 08:17:10 compute-0 podman[207742]: 2025-11-22 08:17:10.939300404 +0000 UTC m=+0.068124979 container exec 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 22 08:17:11 compute-0 podman[207762]: 2025-11-22 08:17:11.003629044 +0000 UTC m=+0.051776206 container exec_died 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 22 08:17:11 compute-0 podman[207742]: 2025-11-22 08:17:11.009956253 +0000 UTC m=+0.138780778 container exec_died 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 08:17:11 compute-0 systemd[1]: libpod-conmon-02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878.scope: Deactivated successfully.
Nov 22 08:17:11 compute-0 sudo[207739]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:11 compute-0 sudo[207924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krdukudfiovgbznzzfgnekiqqafpyozv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799431.211494-933-55884348509455/AnsiballZ_file.py'
Nov 22 08:17:11 compute-0 sudo[207924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:11 compute-0 python3.9[207926]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:11 compute-0 sudo[207924]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:12 compute-0 sudo[208076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzdethyzpzgusjiflspzqcjrkmmdujtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799431.915505-942-61497143823592/AnsiballZ_podman_container_info.py'
Nov 22 08:17:12 compute-0 sudo[208076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:12 compute-0 python3.9[208078]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Nov 22 08:17:12 compute-0 sudo[208076]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:12 compute-0 sudo[208242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxkgpdiblisdhrdyqhbfdnnfbctjltle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799432.612308-950-205888345294893/AnsiballZ_podman_container_exec.py'
Nov 22 08:17:12 compute-0 sudo[208242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:13 compute-0 python3.9[208244]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:17:13 compute-0 systemd[1]: Started libpod-conmon-c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d.scope.
Nov 22 08:17:13 compute-0 podman[208245]: 2025-11-22 08:17:13.198814842 +0000 UTC m=+0.075770585 container exec c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 22 08:17:13 compute-0 podman[208245]: 2025-11-22 08:17:13.228294776 +0000 UTC m=+0.105250509 container exec_died c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a)
Nov 22 08:17:13 compute-0 systemd[1]: libpod-conmon-c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d.scope: Deactivated successfully.
Nov 22 08:17:13 compute-0 sudo[208242]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:14 compute-0 nova_compute[189268]: 2025-11-22 08:17:14.386 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:17:14 compute-0 nova_compute[189268]: 2025-11-22 08:17:14.386 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
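These "Running periodic task ComputeManager.*" lines come from oslo.service's periodic task machinery, which nova-compute drives on a timer. A minimal sketch of that machinery (class and task names here are illustrative, not nova's actual code):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class ManagerLike(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task
        def _check_instance_build_time(self, context):
            pass  # each invocation logs "Running periodic task ..." at DEBUG

    ManagerLike().run_periodic_tasks(context=None)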
Nov 22 08:17:14 compute-0 sudo[208426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eytpbwcuuuogskhnaqpvwcwwkpspsjcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799434.4005845-958-251842884279830/AnsiballZ_podman_container_exec.py'
Nov 22 08:17:14 compute-0 sudo[208426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:14 compute-0 python3.9[208428]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:17:14 compute-0 systemd[1]: Started libpod-conmon-c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d.scope.
Nov 22 08:17:14 compute-0 podman[208429]: 2025-11-22 08:17:14.993527708 +0000 UTC m=+0.099408213 container exec c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 08:17:15 compute-0 nova_compute[189268]: 2025-11-22 08:17:15.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:17:15 compute-0 nova_compute[189268]: 2025-11-22 08:17:15.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:17:15 compute-0 podman[208462]: 2025-11-22 08:17:15.166604056 +0000 UTC m=+0.158576308 container exec_died c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 22 08:17:15 compute-0 podman[208429]: 2025-11-22 08:17:15.294607137 +0000 UTC m=+0.400487642 container exec_died c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Nov 22 08:17:15 compute-0 systemd[1]: libpod-conmon-c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d.scope: Deactivated successfully.
Nov 22 08:17:15 compute-0 podman[208448]: 2025-11-22 08:17:15.373602182 +0000 UTC m=+0.377848801 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:17:15 compute-0 sudo[208426]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:15 compute-0 podman[208446]: 2025-11-22 08:17:15.415507128 +0000 UTC m=+0.420083036 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 22 08:17:15 compute-0 podman[208502]: 2025-11-22 08:17:15.428079474 +0000 UTC m=+0.083748771 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
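The health_status events above are podman's scheduled healthchecks firing for three containers at once: each runs the configured '/openstack/healthcheck' test and records health_status plus a failing streak. The same check can be triggered on demand (exit code 0 means healthy):

    import subprocess

    for name in ("ovn_metadata_agent", "multipathd", "podman_exporter"):
        r = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if r.returncode == 0 else "unhealthy")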
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.109 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.109 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.110 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.110 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.110 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.110 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.111 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.137 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.137 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.137 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.138 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.309 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.311 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5897MB free_disk=72.55876922607422GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.311 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.311 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:17:16 compute-0 sudo[208675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsygvluiitqrrzlktiascwvkswshaplz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799435.5772836-966-209157894300499/AnsiballZ_file.py'
Nov 22 08:17:16 compute-0 sudo[208675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.389 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.389 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.423 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.436 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.438 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:17:16 compute-0 nova_compute[189268]: 2025-11-22 08:17:16.438 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
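The inventory payload a few lines up is what the resource tracker reports to placement; the capacity placement actually schedules against is (total - reserved) * allocation_ratio per resource class. Worked through with the numbers from this log:

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 71.1 (~71 GB of schedulable disk)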
Nov 22 08:17:16 compute-0 python3.9[208677]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:16 compute-0 sudo[208675]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:17 compute-0 sudo[208827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuyhwxtwkkwafvwaaewzlslczfvatuuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799436.8780403-975-148738221096981/AnsiballZ_podman_container_info.py'
Nov 22 08:17:17 compute-0 sudo[208827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:17 compute-0 python3.9[208829]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Nov 22 08:17:17 compute-0 sudo[208827]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:17 compute-0 sudo[208991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qccdtgpectnykupsvrrfcsqlpqizzwdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799437.7143047-983-25404134397252/AnsiballZ_podman_container_exec.py'
Nov 22 08:17:17 compute-0 sudo[208991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:18 compute-0 python3.9[208993]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:17:18 compute-0 systemd[1]: Started libpod-conmon-213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001.scope.
Nov 22 08:17:18 compute-0 podman[208994]: 2025-11-22 08:17:18.313989823 +0000 UTC m=+0.086750355 container exec 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 22 08:17:18 compute-0 podman[208994]: 2025-11-22 08:17:18.353908692 +0000 UTC m=+0.126669194 container exec_died 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:17:18 compute-0 systemd[1]: libpod-conmon-213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001.scope: Deactivated successfully.
Nov 22 08:17:18 compute-0 sudo[208991]: pam_unix(sudo:session): session closed for user root
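The node_exporter exec events above carry its full command line, including the systemd collector filter --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service (the doubled backslash in the log is Python repr escaping). node_exporter matches that pattern against the whole unit name, approximated here with fullmatch; the unit names below are examples, not taken from this log:

    import re

    pattern = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ("rsyslog.service", "virtqemud.service", "sshd.service"):
        print(unit, bool(pattern.fullmatch(unit)))
    # rsyslog.service True, virtqemud.service True, sshd.service False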
Nov 22 08:17:18 compute-0 sudo[209175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlnrhazmpbiimkhjavbmcswhuretwfij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799438.578376-991-256986604076322/AnsiballZ_podman_container_exec.py'
Nov 22 08:17:18 compute-0 sudo[209175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:19 compute-0 python3.9[209177]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:17:19 compute-0 systemd[1]: Started libpod-conmon-213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001.scope.
Nov 22 08:17:19 compute-0 podman[209178]: 2025-11-22 08:17:19.149112891 +0000 UTC m=+0.083118613 container exec 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 22 08:17:19 compute-0 podman[209178]: 2025-11-22 08:17:19.181503877 +0000 UTC m=+0.115509599 container exec_died 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:17:19 compute-0 systemd[1]: libpod-conmon-213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001.scope: Deactivated successfully.
Nov 22 08:17:19 compute-0 sudo[209175]: pam_unix(sudo:session): session closed for user root
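[annotation] The exec/exec_died pairs above are the play probing the account inside the node_exporter container before it fixes up the healthcheck directory on the host; note how systemd wraps each one-shot exec in a transient libpod-conmon-<id>.scope that starts and deactivates around it. A minimal shell replay of the probe pattern (an id -u followed by an id -g, as seen again for podman_exporter below); the podman_container_exec module logged here is a wrapper over this same CLI:

    # One-shot probes of the container account, as in the tasks above
    podman exec node_exporter id -u   # UID of the container's user
    podman exec node_exporter id -g   # primary GID of that user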
Nov 22 08:17:19 compute-0 sudo[209358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nubzfjlupzewibsestdyvebfknqjgqht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799439.3798418-999-114087586956095/AnsiballZ_file.py'
Nov 22 08:17:19 compute-0 sudo[209358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:19 compute-0 python3.9[209360]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:19 compute-0 sudo[209358]: pam_unix(sudo:session): session closed for user root
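[annotation] With the GID known, the healthcheck mount point on the host is normalized: root-owned, mode 0700, recursively. A rough shell equivalent of the ansible.builtin.file parameters in the entry above:

    # Equivalent of: file path=... owner=0 group=0 mode=0700 recurse=True state=directory
    mkdir -p /var/lib/openstack/healthchecks/node_exporter
    chown -R 0:0  /var/lib/openstack/healthchecks/node_exporter
    chmod -R 0700 /var/lib/openstack/healthchecks/node_exporter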
Nov 22 08:17:20 compute-0 sudo[209510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zssjxyjhxpkrnjzyocavqvyejdnuvcto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799440.1554544-1008-191642007091919/AnsiballZ_podman_container_info.py'
Nov 22 08:17:20 compute-0 sudo[209510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:20 compute-0 python3.9[209512]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Nov 22 08:17:20 compute-0 sudo[209510]: pam_unix(sudo:session): session closed for user root
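[annotation] The same sequence now repeats for podman_exporter, starting with a read-only inspection. podman_container_info corresponds to the standard inspect call, whose JSON output feeds the conditionals in later tasks:

    # Equivalent of the podman_container_info task above (JSON on stdout)
    podman container inspect podman_exporter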
Nov 22 08:17:21 compute-0 sudo[209675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiworxykfnsbtjexprqjtirloywvqzru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799440.8268893-1016-149820247661345/AnsiballZ_podman_container_exec.py'
Nov 22 08:17:21 compute-0 sudo[209675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:21 compute-0 python3.9[209677]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:17:21 compute-0 systemd[1]: Started libpod-conmon-2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30.scope.
Nov 22 08:17:21 compute-0 podman[209678]: 2025-11-22 08:17:21.400825437 +0000 UTC m=+0.086568839 container exec 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 08:17:21 compute-0 podman[209678]: 2025-11-22 08:17:21.437743492 +0000 UTC m=+0.123486904 container exec_died 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 22 08:17:21 compute-0 systemd[1]: libpod-conmon-2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30.scope: Deactivated successfully.
Nov 22 08:17:21 compute-0 sudo[209675]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:21 compute-0 sudo[209869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhvybmfzwvokabafnpyizcesgtltarvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799441.6244364-1024-95690396372847/AnsiballZ_podman_container_exec.py'
Nov 22 08:17:21 compute-0 sudo[209869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:21 compute-0 podman[209833]: 2025-11-22 08:17:21.915277923 +0000 UTC m=+0.063519728 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 08:17:22 compute-0 python3.9[209878]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:17:22 compute-0 systemd[1]: Started libpod-conmon-2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30.scope.
Nov 22 08:17:22 compute-0 podman[209882]: 2025-11-22 08:17:22.23113577 +0000 UTC m=+0.110345593 container exec 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:17:22 compute-0 podman[209902]: 2025-11-22 08:17:22.296632772 +0000 UTC m=+0.054667117 container exec_died 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:17:22 compute-0 podman[209882]: 2025-11-22 08:17:22.304248868 +0000 UTC m=+0.183458681 container exec_died 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:17:22 compute-0 systemd[1]: libpod-conmon-2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30.scope: Deactivated successfully.
Nov 22 08:17:22 compute-0 sudo[209869]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:22 compute-0 sudo[210063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbquwjsjbmyhrybxhlpbbdksrgffvxai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799442.51165-1032-1151975098448/AnsiballZ_file.py'
Nov 22 08:17:22 compute-0 sudo[210063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:22 compute-0 python3.9[210065]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:22 compute-0 sudo[210063]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:23 compute-0 sudo[210215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkdhdjfmzmcbejvkrpxvrwijwiilfpvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799443.1787157-1041-116125327627045/AnsiballZ_podman_container_info.py'
Nov 22 08:17:23 compute-0 sudo[210215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:23 compute-0 python3.9[210217]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Nov 22 08:17:23 compute-0 sudo[210215]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:24 compute-0 sudo[210380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knslufohnfnsjrigpqpfvlmvajxoiwro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799443.8918722-1049-14333601782539/AnsiballZ_podman_container_exec.py'
Nov 22 08:17:24 compute-0 sudo[210380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:24 compute-0 python3.9[210382]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:17:24 compute-0 systemd[1]: Started libpod-conmon-0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4.scope.
Nov 22 08:17:24 compute-0 podman[210383]: 2025-11-22 08:17:24.437629066 +0000 UTC m=+0.067676815 container exec 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, release=1755695350, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=minimal rhel9)
Nov 22 08:17:24 compute-0 podman[210383]: 2025-11-22 08:17:24.471868746 +0000 UTC m=+0.101916475 container exec_died 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Nov 22 08:17:24 compute-0 systemd[1]: libpod-conmon-0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4.scope: Deactivated successfully.
Nov 22 08:17:24 compute-0 sudo[210380]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:24 compute-0 sudo[210573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dedujalberbjdvpjnuukhmfcpjmleuir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799444.665874-1057-230489248235625/AnsiballZ_podman_container_exec.py'
Nov 22 08:17:24 compute-0 sudo[210573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:24 compute-0 podman[210538]: 2025-11-22 08:17:24.99366388 +0000 UTC m=+0.098015084 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 22 08:17:25 compute-0 python3.9[210582]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:17:25 compute-0 systemd[1]: Started libpod-conmon-0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4.scope.
Nov 22 08:17:25 compute-0 podman[210592]: 2025-11-22 08:17:25.249641251 +0000 UTC m=+0.082251587 container exec 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter)
Nov 22 08:17:25 compute-0 podman[210592]: 2025-11-22 08:17:25.285889977 +0000 UTC m=+0.118500323 container exec_died 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_id=edpm, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 22 08:17:25 compute-0 systemd[1]: libpod-conmon-0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4.scope: Deactivated successfully.
Nov 22 08:17:25 compute-0 sudo[210573]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:26 compute-0 sudo[210777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdrnntxkshdjrcgvvbehkrxobhuvhiau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799446.1116261-1065-2382363412567/AnsiballZ_file.py'
Nov 22 08:17:26 compute-0 sudo[210777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:26 compute-0 python3.9[210779]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:26 compute-0 sudo[210777]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:27 compute-0 sudo[210929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjaycettwholotwjokkebztbpscswnzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799446.7860088-1074-234019741838203/AnsiballZ_file.py'
Nov 22 08:17:27 compute-0 sudo[210929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:27 compute-0 python3.9[210931]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:27 compute-0 sudo[210929]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:28 compute-0 sudo[211081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxldpefihfwmwhmzzgwbzzdkaefluolb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799447.4145358-1082-170145820491827/AnsiballZ_stat.py'
Nov 22 08:17:28 compute-0 sudo[211081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:28 compute-0 python3.9[211083]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:17:28 compute-0 sudo[211081]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:28 compute-0 sudo[211204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idnrzretwogxmhuenawlihukwtnbnsbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799447.4145358-1082-170145820491827/AnsiballZ_copy.py'
Nov 22 08:17:28 compute-0 sudo[211204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:28 compute-0 python3.9[211206]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799447.4145358-1082-170145820491827/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:28 compute-0 sudo[211204]: pam_unix(sudo:session): session closed for user root
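[annotation] This stat/copy pair is Ansible's idempotent file deployment: stat fetches a SHA-1 of the current /var/lib/edpm-config/firewall/telemetry.yaml, and copy rewrites it only when the rendered firewall.yaml.j2 differs. The checksum logged by the copy task makes this easy to verify from a shell:

    # When the deployed file is in sync, this prints the checksum from the log
    sha1sum /var/lib/edpm-config/firewall/telemetry.yaml
    # d942d984493b214bda2913f753ff68cdcedff00e  /var/lib/edpm-config/firewall/telemetry.yaml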
Nov 22 08:17:29 compute-0 sudo[211356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcfbdkqgyrgvavwyfczcvneexyojsusz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799449.2372165-1098-276905696067089/AnsiballZ_file.py'
Nov 22 08:17:29 compute-0 sudo[211356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:29 compute-0 python3.9[211358]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:29 compute-0 sudo[211356]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:30 compute-0 podman[211452]: 2025-11-22 08:17:30.137285446 +0000 UTC m=+0.085407937 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, distribution-scope=public, managed_by=edpm_ansible, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 22 08:17:30 compute-0 sudo[211527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dawlxndgfdgmxilczbgidrialhthvxwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799449.9133952-1106-272888583771157/AnsiballZ_stat.py'
Nov 22 08:17:30 compute-0 sudo[211527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:30 compute-0 python3.9[211529]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:17:30 compute-0 sudo[211527]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:30 compute-0 sudo[211605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spapbomyxfeboegfbzaiqchigjqegnie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799449.9133952-1106-272888583771157/AnsiballZ_file.py'
Nov 22 08:17:30 compute-0 sudo[211605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:30 compute-0 python3.9[211607]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:30 compute-0 sudo[211605]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:31 compute-0 sudo[211757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itantaepbrjqlkxjhihqrzsqdzaauupo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799451.0387316-1118-60525866270615/AnsiballZ_stat.py'
Nov 22 08:17:31 compute-0 sudo[211757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:31 compute-0 python3.9[211759]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:17:31 compute-0 sudo[211757]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:31 compute-0 sudo[211835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thbctptbmcaenjbtxtqadrwzcbsueaak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799451.0387316-1118-60525866270615/AnsiballZ_file.py'
Nov 22 08:17:31 compute-0 sudo[211835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:31 compute-0 python3.9[211837]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.du8ndggz recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:31 compute-0 sudo[211835]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:32 compute-0 sudo[211987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkxphyxuoeelblqzixrxtrnnlwzsfyol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799452.1097286-1130-89786286995849/AnsiballZ_stat.py'
Nov 22 08:17:32 compute-0 sudo[211987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:32 compute-0 python3.9[211989]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:17:32 compute-0 sudo[211987]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:32 compute-0 sudo[212065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcstdhigwmlktnrwhjopyzujfuoexnpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799452.1097286-1130-89786286995849/AnsiballZ_file.py'
Nov 22 08:17:32 compute-0 sudo[212065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:33 compute-0 python3.9[212067]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:33 compute-0 sudo[212065]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:33 compute-0 podman[212068]: 2025-11-22 08:17:33.091235492 +0000 UTC m=+0.050799958 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 22 08:17:33 compute-0 sudo[212241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmlyjhrjgedtfhqhfkubhooxmdmjvvbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799453.2511134-1143-122490108409494/AnsiballZ_command.py'
Nov 22 08:17:33 compute-0 sudo[212241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:33 compute-0 python3.9[212243]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:17:33 compute-0 sudo[212241]: pam_unix(sudo:session): session closed for user root
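[annotation] Before generating anything, the play snapshots the live ruleset as JSON; the -j form of nft is far easier to diff and query than the plain-text listing. An illustrative query (jq is an addition here, not something the play runs):

    # JSON snapshot of the kernel's nftables state, as in the task above,
    # reduced to the list of table names
    nft -j list ruleset | jq '[.nftables[] | select(.table) | .table.name]'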
Nov 22 08:17:34 compute-0 sudo[212394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdrnfiokxkktmofosuxthseymzcjcpza ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763799453.8794284-1151-38665250587867/AnsiballZ_edpm_nftables_from_files.py'
Nov 22 08:17:34 compute-0 sudo[212394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:34 compute-0 python3[212396]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 22 08:17:34 compute-0 sudo[212394]: pam_unix(sudo:session): session closed for user root
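[annotation] edpm_nftables_from_files then merges every YAML snippet under /var/lib/edpm-config/firewall (including the telemetry.yaml deployed at 08:17:28) into a single rule list. The snippet below is a hypothetical illustration of the shape such files take; the exact schema is owned by the edpm-ansible role, and the rule name here is invented, though the ports match the exporters configured above:

    $ cat /var/lib/edpm-config/firewall/telemetry.yaml   # hypothetical content
    - rule_name: "100 metrics exporters"
      rule:
        proto: tcp
        dport: [9100, 9105, 9882]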
Nov 22 08:17:35 compute-0 sudo[212546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpxbyclofqgzxsrymlclpkmqwzhfgbtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799454.7510314-1159-175667277519746/AnsiballZ_stat.py'
Nov 22 08:17:35 compute-0 sudo[212546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:35 compute-0 python3.9[212548]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:17:35 compute-0 sudo[212546]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:35 compute-0 sudo[212624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qplndfaltwwgzjgdairyakpyspvirtir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799454.7510314-1159-175667277519746/AnsiballZ_file.py'
Nov 22 08:17:35 compute-0 sudo[212624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:35 compute-0 python3.9[212626]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:35 compute-0 sudo[212624]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:36 compute-0 sudo[212776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myghamobdjhndmdnvxrswvwqulpiuzxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799455.8362746-1171-271191843971023/AnsiballZ_stat.py'
Nov 22 08:17:36 compute-0 sudo[212776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:36 compute-0 python3.9[212778]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:17:36 compute-0 sudo[212776]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:36 compute-0 sudo[212854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtxxnksybbjcohqsmwiwazomlnvztfnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799455.8362746-1171-271191843971023/AnsiballZ_file.py'
Nov 22 08:17:36 compute-0 sudo[212854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:36 compute-0 python3.9[212856]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:36 compute-0 sudo[212854]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:37 compute-0 sudo[213006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuhuyacvjidzasrabodirsktaflzcezo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799457.0016892-1183-163578389304817/AnsiballZ_stat.py'
Nov 22 08:17:37 compute-0 sudo[213006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:37 compute-0 python3.9[213008]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:17:37 compute-0 sudo[213006]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:38 compute-0 sudo[213084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amryyfxlfqdzxqqlvctrwollobeqvxtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799457.0016892-1183-163578389304817/AnsiballZ_file.py'
Nov 22 08:17:38 compute-0 sudo[213084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:38 compute-0 python3.9[213086]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:38 compute-0 sudo[213084]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:38 compute-0 sudo[213236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqpzbcyilxrkiqxshxakgvkwgwgcrays ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799458.591617-1195-185760259802323/AnsiballZ_stat.py'
Nov 22 08:17:38 compute-0 sudo[213236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:39 compute-0 python3.9[213238]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:17:39 compute-0 sudo[213236]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:39 compute-0 sudo[213314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocxpdpshtlcypabkjvivuzbnjmiiaxyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799458.591617-1195-185760259802323/AnsiballZ_file.py'
Nov 22 08:17:39 compute-0 sudo[213314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:40 compute-0 python3.9[213316]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:40 compute-0 sudo[213314]: pam_unix(sudo:session): session closed for user root
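[annotation] Four generated pieces are now staged under /etc/nftables — edpm-jumps, edpm-update-jumps, edpm-flushes and edpm-chains — with the rules file still to come below. As a hypothetical illustration of what the chains file declares (table and chain names invented; only the nft syntax is standard):

    $ cat /etc/nftables/edpm-chains.nft   # hypothetical excerpt
    table inet filter {
        # empty chain declarations that the rules and jump files hook into
        chain EDPM_INPUT {
        }
    }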
Nov 22 08:17:40 compute-0 sudo[213466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbnqhludyvstqpfabdlquslmlejhfapf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799460.3279626-1207-230786516520933/AnsiballZ_stat.py'
Nov 22 08:17:40 compute-0 sudo[213466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:40 compute-0 python3.9[213468]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:17:40 compute-0 sudo[213466]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:41 compute-0 sudo[213591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yumtrpuwlpqnmtxprwsrdfxzjqolcrsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799460.3279626-1207-230786516520933/AnsiballZ_copy.py'
Nov 22 08:17:41 compute-0 sudo[213591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:41 compute-0 python3.9[213593]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763799460.3279626-1207-230786516520933/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:41 compute-0 sudo[213591]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:41 compute-0 sudo[213743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-funzyteylgjscrbntrbwnlxvnlbgijzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799461.6132684-1222-96873036103214/AnsiballZ_file.py'
Nov 22 08:17:41 compute-0 sudo[213743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:42 compute-0 python3.9[213745]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:42 compute-0 sudo[213743]: pam_unix(sudo:session): session closed for user root
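[annotation] The touch above drops a sentinel, edpm-rules.nft.changed, recording that the ruleset content changed in this run; the stat at 08:17:45 below reads it back so a later step can load the new rules only when needed. A sketch of that gate, with the reload command being an assumption about the play's handler rather than something shown in this log:

    # Hypothetical change gate built on the sentinel from the log
    if [ -f /etc/nftables/edpm-rules.nft.changed ]; then
        nft -f /etc/nftables/edpm-rules.nft    # assumed reload step
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi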
Nov 22 08:17:42 compute-0 sudo[213895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quqdqbrlsyiupjvsqtlisiuojaubpwij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799462.3159852-1230-22564871159005/AnsiballZ_command.py'
Nov 22 08:17:42 compute-0 sudo[213895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:42 compute-0 python3.9[213897]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:17:42 compute-0 sudo[213895]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:43 compute-0 sudo[214050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujzwchddhlxbmbqrtvetgdpkxxqnlhgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799463.0615535-1238-176727670056729/AnsiballZ_blockinfile.py'
Nov 22 08:17:43 compute-0 sudo[214050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:43 compute-0 python3.9[214052]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:43 compute-0 sudo[214050]: pam_unix(sudo:session): session closed for user root
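[annotation] Given the parameters logged for the blockinfile task above (marker "# {mark} ANSIBLE MANAGED BLOCK", marker_begin=BEGIN, marker_end=END), the managed region it maintains in /etc/sysconfig/nftables.conf would render as below; the include order is exactly the block= payload shown in the log:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

Because validate=nft -c -f %s is set, blockinfile syntax-checks the edited copy with nft before moving it into place, and create=False means the task fails rather than creating the file if it is missing.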
Nov 22 08:17:44 compute-0 sudo[214202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iitryljqxultnvedlhyueludavqjhseb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799464.0188293-1247-170236895968424/AnsiballZ_command.py'
Nov 22 08:17:44 compute-0 sudo[214202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:44 compute-0 python3.9[214204]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:17:44 compute-0 sudo[214202]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:45 compute-0 sudo[214355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhdwnhjradxaisgtohwasgwnrteqvunc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799464.82928-1255-245840982318879/AnsiballZ_stat.py'
Nov 22 08:17:45 compute-0 sudo[214355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:45 compute-0 python3.9[214357]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:17:45 compute-0 sudo[214355]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:45 compute-0 sudo[214539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnujtgbhdbhfmvpnaxfxbmfqpzjkitmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799465.5674684-1263-206385839016291/AnsiballZ_command.py'
Nov 22 08:17:45 compute-0 sudo[214539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:45 compute-0 podman[214493]: 2025-11-22 08:17:45.856115494 +0000 UTC m=+0.054698628 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 22 08:17:45 compute-0 podman[214495]: 2025-11-22 08:17:45.87328222 +0000 UTC m=+0.067573613 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 08:17:45 compute-0 podman[214484]: 2025-11-22 08:17:45.887287106 +0000 UTC m=+0.083245466 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 08:17:46 compute-0 python3.9[214563]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:17:46 compute-0 sudo[214539]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:46 compute-0 sudo[214726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhnonrbscvnhwkrpxmnlhyhhfqdrbbid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799466.3039901-1271-227113329665836/AnsiballZ_file.py'
Nov 22 08:17:46 compute-0 sudo[214726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:46 compute-0 python3.9[214728]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:17:46 compute-0 sudo[214726]: pam_unix(sudo:session): session closed for user root
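[annotation] Taken together, the tasks above implement a change sentinel: writing edpm-rules.nft is followed by touching edpm-rules.nft.changed, the flush-and-reload (cat edpm-flushes.nft edpm-rules.nft edpm-update-jumps.nft | nft -f -) runs only while that sentinel exists, and the sentinel is then deleted so an unchanged re-run skips the reload. A sketch of that flow, reconstructed from the task sequence rather than the playbook source:

    import os
    import subprocess
    from pathlib import Path

    SENTINEL = "/etc/nftables/edpm-rules.nft.changed"
    APPLY_ORDER = [  # subset and order copied from the logged apply command
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
    ]

    if os.path.exists(SENTINEL):  # present only when edpm-rules.nft was rewritten
        ruleset = "".join(Path(p).read_text(encoding="utf-8") for p in APPLY_ORDER)
        subprocess.run(["nft", "-f", "-"], input=ruleset, text=True, check=True)
        os.remove(SENTINEL)       # consume the sentinel; re-runs become no-ops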
Nov 22 08:17:47 compute-0 sshd-session[214494]: Invalid user loginuser from 80.94.92.164 port 47618
Nov 22 08:17:47 compute-0 sshd-session[189638]: Connection closed by 192.168.122.30 port 58694
Nov 22 08:17:47 compute-0 sshd-session[189635]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:17:47 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Nov 22 08:17:47 compute-0 systemd[1]: session-26.scope: Consumed 1min 41.149s CPU time.
Nov 22 08:17:47 compute-0 systemd-logind[826]: Session 26 logged out. Waiting for processes to exit.
Nov 22 08:17:47 compute-0 systemd-logind[826]: Removed session 26.
Nov 22 08:17:47 compute-0 sshd-session[214494]: Connection closed by invalid user loginuser 80.94.92.164 port 47618 [preauth]
Nov 22 08:17:52 compute-0 podman[214756]: 2025-11-22 08:17:52.109572501 +0000 UTC m=+0.057234491 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 08:17:54 compute-0 sshd-session[214779]: Accepted publickey for zuul from 192.168.122.30 port 34404 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 08:17:54 compute-0 systemd-logind[826]: New session 27 of user zuul.
Nov 22 08:17:54 compute-0 systemd[1]: Started Session 27 of User zuul.
Nov 22 08:17:54 compute-0 sshd-session[214779]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 08:17:55 compute-0 podman[214784]: 2025-11-22 08:17:55.145521154 +0000 UTC m=+0.098454490 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 08:17:55 compute-0 sudo[214959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztmjrxkufgfikekscjrcxovoeqteyrlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799475.07316-24-236809265046302/AnsiballZ_systemd_service.py'
Nov 22 08:17:55 compute-0 sudo[214959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:17:56 compute-0 python3.9[214961]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:17:56 compute-0 systemd[1]: Reloading.
Nov 22 08:17:56 compute-0 systemd-rc-local-generator[214989]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:17:56 compute-0 systemd-sysv-generator[214993]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:17:56 compute-0 sudo[214959]: pam_unix(sudo:session): session closed for user root
Nov 22 08:17:57 compute-0 python3.9[215146]: ansible-ansible.builtin.service_facts Invoked
Nov 22 08:17:57 compute-0 network[215163]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 08:17:57 compute-0 network[215164]: 'network-scripts' will be removed from distribution in near future.
Nov 22 08:17:57 compute-0 network[215165]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 08:17:59 compute-0 podman[203476]: time="2025-11-22T08:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:17:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22540 "" "Go-http-client/1.1"
Nov 22 08:17:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3410 "" "Go-http-client/1.1"
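[annotation] The two GETs above are podman_exporter polling the libpod REST API over the socket it mounts (/run/podman/podman.sock, per the container's CONTAINER_HOST). The same endpoint can be queried with a stdlib-only UNIX-socket HTTP client; the helper class below is our own scaffolding, and only the socket path and URL come from the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")  # dummy host; never resolved
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])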
Nov 22 08:18:00 compute-0 podman[215278]: 2025-11-22 08:18:00.241884565 +0000 UTC m=+0.063336242 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=9.6, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, config_id=edpm, vcs-type=git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., release=1755695350)
Nov 22 08:18:00 compute-0 sudo[215456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfyflqpuomokpumqqwvoiyncjszkgjzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799480.672174-47-126392378793665/AnsiballZ_systemd_service.py'
Nov 22 08:18:00 compute-0 sudo[215456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:01 compute-0 python3.9[215458]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:18:01 compute-0 sudo[215456]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:01 compute-0 openstack_network_exporter[205661]: ERROR   08:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:18:01 compute-0 openstack_network_exporter[205661]: ERROR   08:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:18:01 compute-0 openstack_network_exporter[205661]: ERROR   08:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:18:01 compute-0 openstack_network_exporter[205661]: ERROR   08:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:18:01 compute-0 openstack_network_exporter[205661]: ERROR   08:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:18:02 compute-0 sudo[215613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgddsizmqtanusnswrgnsrmbftwpqxgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799481.6879904-57-204260714618892/AnsiballZ_file.py'
Nov 22 08:18:02 compute-0 sudo[215613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:03 compute-0 python3.9[215615]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:18:03 compute-0 sudo[215613]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:03 compute-0 sudo[215780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhauzwbbhquljmkzambvhhncbzeudlgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799483.1973908-65-158548291403575/AnsiballZ_file.py'
Nov 22 08:18:03 compute-0 sudo[215780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:03 compute-0 podman[215739]: 2025-11-22 08:18:03.553355691 +0000 UTC m=+0.091031940 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 22 08:18:03 compute-0 python3.9[215789]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:18:03 compute-0 sudo[215780]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:05 compute-0 sudo[215939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjptvcbqfhdricrorruvyyaeuecjrygb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799484.9141548-74-194284299739686/AnsiballZ_command.py'
Nov 22 08:18:05 compute-0 sudo[215939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:05 compute-0 python3.9[215941]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:18:05 compute-0 sudo[215939]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:06 compute-0 python3.9[216093]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 08:18:07 compute-0 sudo[216243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxfwhfwuwlldofmacbuzbxmysunbtwqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799486.6865206-92-143827548428359/AnsiballZ_systemd_service.py'
Nov 22 08:18:07 compute-0 sudo[216243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:07 compute-0 python3.9[216245]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:18:07 compute-0 systemd[1]: Reloading.
Nov 22 08:18:07 compute-0 systemd-rc-local-generator[216265]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:18:07 compute-0 systemd-sysv-generator[216271]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:18:07 compute-0 sudo[216243]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:08 compute-0 sudo[216429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukgysuwupfjyqnjhgyqbdfjkixxkcyib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799487.959673-100-170597745781370/AnsiballZ_command.py'
Nov 22 08:18:08 compute-0 sudo[216429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:08 compute-0 python3.9[216431]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:18:08 compute-0 sudo[216429]: pam_unix(sudo:session): session closed for user root
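[annotation] The reset-failed call is the final step of a clean unit removal that the preceding tasks spell out: stop and disable the service, delete its unit files from both /usr/lib/systemd/system and /etc/systemd/system, daemon-reload, then clear any lingering failed state. A compact sketch of that sequence (the systemctl verbs are real; the ordering is inferred from the log):

    import os
    import subprocess

    UNIT = "tripleo_ceilometer_agent_ipmi.service"

    subprocess.run(["systemctl", "disable", "--now", UNIT], check=False)
    for unit_dir in ("/usr/lib/systemd/system", "/etc/systemd/system"):
        try:
            os.remove(os.path.join(unit_dir, UNIT))
        except FileNotFoundError:
            pass                  # already gone on re-runs
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    # Forget any failed state left behind by the now-deleted unit.
    subprocess.run(["systemctl", "reset-failed", UNIT], check=False)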
Nov 22 08:18:09 compute-0 sudo[216582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rupfhomipfcmxlxxbokebtutvxljirde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799488.9861429-109-251962224551268/AnsiballZ_file.py'
Nov 22 08:18:09 compute-0 sudo[216582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:09 compute-0 python3.9[216584]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:18:09 compute-0 sudo[216582]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:18:09.947 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:18:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:18:09.949 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:18:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:18:09.949 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:18:10 compute-0 python3.9[216734]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:18:11 compute-0 python3.9[216886]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:18:11 compute-0 python3.9[217007]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763799490.5525367-125-26403349602455/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:18:12 compute-0 sudo[217157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbtawhtfqmfxnkixldextwpaaljaxdik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799492.094773-143-170870723830268/AnsiballZ_getent.py'
Nov 22 08:18:12 compute-0 sudo[217157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:12 compute-0 python3.9[217159]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Nov 22 08:18:12 compute-0 sudo[217157]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:14 compute-0 python3.9[217310]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:18:14 compute-0 python3.9[217431]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1763799493.5877433-171-151473552738976/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:18:15 compute-0 nova_compute[189268]: 2025-11-22 08:18:15.428 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:18:15 compute-0 podman[217555]: 2025-11-22 08:18:15.97688895 +0000 UTC m=+0.060810910 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 22 08:18:16 compute-0 podman[217557]: 2025-11-22 08:18:16.009490838 +0000 UTC m=+0.088544750 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 22 08:18:16 compute-0 podman[217556]: 2025-11-22 08:18:16.016210668 +0000 UTC m=+0.095125356 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 08:18:16 compute-0 nova_compute[189268]: 2025-11-22 08:18:16.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:18:16 compute-0 nova_compute[189268]: 2025-11-22 08:18:16.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:18:16 compute-0 nova_compute[189268]: 2025-11-22 08:18:16.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:18:16 compute-0 nova_compute[189268]: 2025-11-22 08:18:16.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:18:16 compute-0 nova_compute[189268]: 2025-11-22 08:18:16.123 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:18:16 compute-0 nova_compute[189268]: 2025-11-22 08:18:16.123 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:18:16 compute-0 nova_compute[189268]: 2025-11-22 08:18:16.124 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:18:16 compute-0 nova_compute[189268]: 2025-11-22 08:18:16.124 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:18:16 compute-0 python3.9[217619]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:18:16 compute-0 nova_compute[189268]: 2025-11-22 08:18:16.278 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:18:16 compute-0 nova_compute[189268]: 2025-11-22 08:18:16.279 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5833MB free_disk=72.5579948425293GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:18:16 compute-0 nova_compute[189268]: 2025-11-22 08:18:16.279 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:18:16 compute-0 nova_compute[189268]: 2025-11-22 08:18:16.280 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:18:16 compute-0 nova_compute[189268]: 2025-11-22 08:18:16.335 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:18:16 compute-0 nova_compute[189268]: 2025-11-22 08:18:16.336 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:18:16 compute-0 nova_compute[189268]: 2025-11-22 08:18:16.358 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:18:16 compute-0 nova_compute[189268]: 2025-11-22 08:18:16.373 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:18:16 compute-0 nova_compute[189268]: 2025-11-22 08:18:16.375 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:18:16 compute-0 nova_compute[189268]: 2025-11-22 08:18:16.376 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
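[annotation] The inventory payload logged above determines schedulable capacity via Placement's rule capacity = (total - reserved) * allocation_ratio. A worked check with the logged numbers (the figures are copied from the log; the formula is the standard Placement capacity rule):

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} allocatable")
    # -> MEMORY_MB: 7167, VCPU: 32, DISK_GB: 71.1

So the 8 physical vCPUs advertise 32 schedulable units at allocation_ratio 4.0, while memory capacity is reduced by the 512 MB reservation visible as used_ram in the resource view above.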
Nov 22 08:18:16 compute-0 python3.9[217762]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1763799495.6794248-171-217670866999655/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:18:17 compute-0 nova_compute[189268]: 2025-11-22 08:18:17.376 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:18:17 compute-0 nova_compute[189268]: 2025-11-22 08:18:17.377 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:18:17 compute-0 nova_compute[189268]: 2025-11-22 08:18:17.377 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:18:17 compute-0 nova_compute[189268]: 2025-11-22 08:18:17.390 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:18:17 compute-0 nova_compute[189268]: 2025-11-22 08:18:17.390 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:18:17 compute-0 nova_compute[189268]: 2025-11-22 08:18:17.391 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:18:17 compute-0 nova_compute[189268]: 2025-11-22 08:18:17.391 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:18:17 compute-0 python3.9[217912]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:18:18 compute-0 nova_compute[189268]: 2025-11-22 08:18:18.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:18:18 compute-0 python3.9[218033]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1763799496.833886-171-142479646208380/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:18:19 compute-0 python3.9[218183]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:18:19 compute-0 python3.9[218335]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:18:20 compute-0 python3.9[218487]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:18:21 compute-0 python3.9[218608]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799500.1316662-230-157590694380840/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:18:21 compute-0 python3.9[218758]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.086 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.087 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.087 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.088 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.088 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.089 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.089 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.089 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.089 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.089 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.089 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.089 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.089 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b769730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.093 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.093 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.094 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.094 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.094 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.094 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.094 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.095 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.095 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.095 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.095 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.095 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.095 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.096 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.096 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.096 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.096 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.096 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.096 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.097 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.097 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.097 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.097 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.097 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.097 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.098 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.098 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.098 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.099 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.099 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.099 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.099 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.099 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.100 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.100 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.100 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.100 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.100 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.100 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.101 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.101 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.101 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.101 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.101 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.101 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.102 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.102 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:18:22.104 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:18:22 compute-0 python3.9[218834]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:18:22 compute-0 podman[218836]: 2025-11-22 08:18:22.336878765 +0000 UTC m=+0.066391820 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a)
Nov 22 08:18:22 compute-0 python3.9[219005]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:18:23 compute-0 python3.9[219126]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799502.4125845-230-13839165521349/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:18:24 compute-0 python3.9[219276]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:18:24 compute-0 python3.9[219397]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799503.7395449-230-128484711270872/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:18:25 compute-0 python3.9[219547]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:18:25 compute-0 podman[219642]: 2025-11-22 08:18:25.673867296 +0000 UTC m=+0.081789377 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:18:25 compute-0 python3.9[219679]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799504.8711362-230-168550666109402/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:18:26 compute-0 python3.9[219842]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:18:26 compute-0 python3.9[219963]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799505.9387138-230-242648481558336/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:18:28 compute-0 python3.9[220113]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:18:28 compute-0 python3.9[220189]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:18:29 compute-0 podman[203476]: time="2025-11-22T08:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:18:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22540 "" "Go-http-client/1.1"
Nov 22 08:18:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3414 "" "Go-http-client/1.1"
Nov 22 08:18:29 compute-0 sudo[220339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoabnvsbegbiuyzmodwskvkjvpdwpupg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799509.7161674-325-223579210418711/AnsiballZ_file.py'
Nov 22 08:18:29 compute-0 sudo[220339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:30 compute-0 python3.9[220341]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:18:30 compute-0 sudo[220339]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:30 compute-0 sudo[220506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hngeqoupgaiepyeqqmmdquwwhhbeffvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799510.3863618-333-255635331265918/AnsiballZ_file.py'
Nov 22 08:18:30 compute-0 sudo[220506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:30 compute-0 podman[220465]: 2025-11-22 08:18:30.673333532 +0000 UTC m=+0.060312536 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, release=1755695350, name=ubi9-minimal, vcs-type=git, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter)
Nov 22 08:18:30 compute-0 python3.9[220512]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:18:30 compute-0 sudo[220506]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:31 compute-0 sudo[220662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmccagprwiorwunptohupwmcmobibeje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799511.0350704-341-25044863341052/AnsiballZ_file.py'
Nov 22 08:18:31 compute-0 sudo[220662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:31 compute-0 openstack_network_exporter[205661]: ERROR   08:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:18:31 compute-0 openstack_network_exporter[205661]: ERROR   08:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:18:31 compute-0 openstack_network_exporter[205661]: ERROR   08:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:18:31 compute-0 openstack_network_exporter[205661]: ERROR   08:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:18:31 compute-0 openstack_network_exporter[205661]: ERROR   08:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:18:31 compute-0 python3.9[220664]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:18:31 compute-0 sudo[220662]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:31 compute-0 sudo[220815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqjtdaeqqufgjrcpffhpsefemdbshnri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799511.7018745-349-46372362677430/AnsiballZ_stat.py'
Nov 22 08:18:31 compute-0 sudo[220815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:32 compute-0 python3.9[220817]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:18:32 compute-0 sudo[220815]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:32 compute-0 sudo[220938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbmlzdkrmaluzhlxccqjlmzlpgjixowf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799511.7018745-349-46372362677430/AnsiballZ_copy.py'
Nov 22 08:18:32 compute-0 sudo[220938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:32 compute-0 python3.9[220940]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763799511.7018745-349-46372362677430/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:18:32 compute-0 sudo[220938]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:32 compute-0 sudo[221014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtfekzlcqsyvrczgfstsaqxenngvzday ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799511.7018745-349-46372362677430/AnsiballZ_stat.py'
Nov 22 08:18:32 compute-0 sudo[221014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:33 compute-0 python3.9[221016]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:18:33 compute-0 sudo[221014]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:33 compute-0 sudo[221137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmzqikdatifyjsexemajgqvwlkpsclwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799511.7018745-349-46372362677430/AnsiballZ_copy.py'
Nov 22 08:18:33 compute-0 sudo[221137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:33 compute-0 podman[221139]: 2025-11-22 08:18:33.702303624 +0000 UTC m=+0.073904383 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:18:33 compute-0 python3.9[221140]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763799511.7018745-349-46372362677430/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:18:33 compute-0 sudo[221137]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:34 compute-0 sudo[221311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttqrnagtztmekklhnfolfhhgmqgtyxxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799514.005247-349-255253395642132/AnsiballZ_stat.py'
Nov 22 08:18:34 compute-0 sudo[221311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:34 compute-0 python3.9[221313]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:18:34 compute-0 sudo[221311]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:34 compute-0 sudo[221434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvjpctckpqaxcgiwwdxbrjoprsjtiwbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799514.005247-349-255253395642132/AnsiballZ_copy.py'
Nov 22 08:18:34 compute-0 sudo[221434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:34 compute-0 python3.9[221436]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763799514.005247-349-255253395642132/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 08:18:35 compute-0 sudo[221434]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:35 compute-0 sudo[221586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdawjwqzynbpleqpcmfxbwhkgjgvickv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799515.2835672-391-161751344565359/AnsiballZ_container_config_data.py'
Nov 22 08:18:35 compute-0 sudo[221586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:35 compute-0 python3.9[221588]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Nov 22 08:18:35 compute-0 sudo[221586]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:36 compute-0 sudo[221738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ertpknpnyjaddcnbqmtnswlvdjnlvdwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799516.1600351-400-43692862048067/AnsiballZ_container_config_hash.py'
Nov 22 08:18:36 compute-0 sudo[221738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:36 compute-0 python3.9[221740]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 08:18:36 compute-0 sudo[221738]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:37 compute-0 sudo[221890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isguopulzxcfzxntqvwydubhhqbdzozn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763799517.035942-410-125790135176695/AnsiballZ_edpm_container_manage.py'
Nov 22 08:18:37 compute-0 sudo[221890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:37 compute-0 python3[221892]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 08:18:38 compute-0 podman[221929]: 2025-11-22 08:18:38.012734836 +0000 UTC m=+0.059244066 container create c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 08:18:38 compute-0 podman[221929]: 2025-11-22 08:18:37.978591125 +0000 UTC m=+0.025100365 image pull 02e0056780c6b31017996766cd13000137ba644dac3fc851da034db8cf4ceb2c quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Nov 22 08:18:38 compute-0 python3[221892]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Nov 22 08:18:38 compute-0 sudo[221890]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:38 compute-0 sudo[222117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djmflygcxqpyqeyspdiwghmqkslfdbzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799518.2978785-418-191768821689783/AnsiballZ_stat.py'
Nov 22 08:18:38 compute-0 sudo[222117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:38 compute-0 python3.9[222119]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:18:38 compute-0 sudo[222117]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:39 compute-0 sudo[222271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuyxicfwibzwgqydbyxsaewlnohsouvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799518.997723-427-244198175503114/AnsiballZ_file.py'
Nov 22 08:18:39 compute-0 sudo[222271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:39 compute-0 python3.9[222273]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:18:39 compute-0 sudo[222271]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:40 compute-0 sudo[222422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqkgviwyltqbpbkoysvstgtegvzuytux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799520.207925-427-13469952703722/AnsiballZ_copy.py'
Nov 22 08:18:40 compute-0 sudo[222422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:40 compute-0 python3.9[222424]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763799520.207925-427-13469952703722/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:18:40 compute-0 sudo[222422]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:41 compute-0 sudo[222498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxplxavrkjbfdzbqskimygtbpjnqiwao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799520.207925-427-13469952703722/AnsiballZ_systemd.py'
Nov 22 08:18:41 compute-0 sudo[222498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:41 compute-0 python3.9[222500]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:18:41 compute-0 systemd[1]: Reloading.
Nov 22 08:18:41 compute-0 systemd-rc-local-generator[222526]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:18:41 compute-0 systemd-sysv-generator[222531]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:18:42 compute-0 sudo[222498]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:42 compute-0 sudo[222609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnsoctmmhuefxdffglbrcmybewthguto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799520.207925-427-13469952703722/AnsiballZ_systemd.py'
Nov 22 08:18:42 compute-0 sudo[222609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:42 compute-0 python3.9[222611]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:18:42 compute-0 systemd[1]: Reloading.
Nov 22 08:18:42 compute-0 systemd-sysv-generator[222645]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:18:42 compute-0 systemd-rc-local-generator[222642]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:18:43 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Nov 22 08:18:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77c59d9c513bca6ce8ec067adfa0a21d173348ae271a6de1698dd709fd963f0/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 08:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77c59d9c513bca6ce8ec067adfa0a21d173348ae271a6de1698dd709fd963f0/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 22 08:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77c59d9c513bca6ce8ec067adfa0a21d173348ae271a6de1698dd709fd963f0/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 22 08:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77c59d9c513bca6ce8ec067adfa0a21d173348ae271a6de1698dd709fd963f0/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 22 08:18:43 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd.
Nov 22 08:18:43 compute-0 podman[222652]: 2025-11-22 08:18:43.233628908 +0000 UTC m=+0.176351336 container init c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: + sudo -E kolla_set_configs
Nov 22 08:18:43 compute-0 podman[222652]: 2025-11-22 08:18:43.259073693 +0000 UTC m=+0.201796111 container start c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 22 08:18:43 compute-0 sudo[222673]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 22 08:18:43 compute-0 sudo[222673]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 08:18:43 compute-0 sudo[222673]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 22 08:18:43 compute-0 podman[222652]: ceilometer_agent_ipmi
Nov 22 08:18:43 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Nov 22 08:18:43 compute-0 sudo[222609]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: INFO:__main__:Validating config file
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: INFO:__main__:Copying service configuration files
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: INFO:__main__:Writing out command to execute
Nov 22 08:18:43 compute-0 sudo[222673]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: ++ cat /run_command
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: + ARGS=
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: + sudo kolla_copy_cacerts
Nov 22 08:18:43 compute-0 podman[222674]: 2025-11-22 08:18:43.346247131 +0000 UTC m=+0.078053101 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 08:18:43 compute-0 sudo[222696]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 22 08:18:43 compute-0 sudo[222696]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 08:18:43 compute-0 sudo[222696]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 22 08:18:43 compute-0 systemd[1]: c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd-547b59a2f17bf799.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 08:18:43 compute-0 systemd[1]: c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd-547b59a2f17bf799.service: Failed with result 'exit-code'.
Nov 22 08:18:43 compute-0 sudo[222696]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: + [[ ! -n '' ]]
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: + . kolla_extend_start
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: + umask 0022
Nov 22 08:18:43 compute-0 ceilometer_agent_ipmi[222667]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Nov 22 08:18:43 compute-0 sudo[222848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mremawzyqpyqzwapntedlcnospuxmona ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799523.5949655-453-275435408541078/AnsiballZ_container_config_data.py'
Nov 22 08:18:43 compute-0 sudo[222848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:44 compute-0 python3.9[222850]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Nov 22 08:18:44 compute-0 sudo[222848]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.238 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.238 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.238 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.238 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.238 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.238 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.238 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.238 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.238 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.238 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.239 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.239 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.239 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.239 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.239 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.239 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.239 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.239 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.240 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.240 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.240 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.240 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.240 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.240 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.240 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.240 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.240 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.240 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.241 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.241 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.241 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.241 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.241 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.241 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.241 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.241 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.241 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.241 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.241 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.241 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.242 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.242 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.242 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.242 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.242 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.242 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.242 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.242 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.242 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.242 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.242 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.243 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.243 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.243 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.243 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.243 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.243 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.243 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.243 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.243 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.243 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.243 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.244 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.244 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.244 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.244 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.244 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.244 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.244 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.244 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.244 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.244 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.244 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.244 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.245 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.245 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.245 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.245 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.245 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.245 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.245 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.245 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.245 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.245 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.245 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.246 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.246 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.246 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.246 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.246 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.246 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.246 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.246 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.246 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.246 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.246 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.246 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.247 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.247 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.247 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.247 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.247 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.247 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.247 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.247 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.247 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.247 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.248 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.248 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.248 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.248 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.248 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.248 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.248 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.248 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.248 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.248 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.248 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.249 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.249 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.249 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.249 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.249 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.249 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.249 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.249 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.249 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.249 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.250 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.250 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.250 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.250 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.250 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.250 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.250 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.250 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.250 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.250 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.250 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.251 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.251 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.251 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.251 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.251 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.251 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.251 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.251 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.251 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.251 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.251 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.251 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.252 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.252 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.252 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.252 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.252 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.252 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.252 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.252 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.252 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.252 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.252 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.252 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.253 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
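
The row of asterisks above is the footer that oslo.config's log_opt_values() emits after dumping every registered option; options declared with secret=True (publisher.telemetry_secret, transport_url, the rgw_admin_credentials keys) are printed as '****' rather than their real values. A minimal sketch of that masking behaviour against the public oslo.config API; the two option names below are illustrative, not ceilometer's own registration code:

    import logging

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.StrOpt('telemetry_secret', secret=True, default='s3kr1t'),  # dumped as '****'
        cfg.IntOpt('batch_size', default=50),                           # dumped verbatim
    ])
    CONF([])  # parse an empty command line

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('demo')
    # Emits one DEBUG line per registered option, masking secret opts,
    # in the same style as the dump above.
    CONF.log_opt_values(LOG, logging.DEBUG)
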
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.273 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.275 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.276 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
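
Before polling starts, the manager scans polling.pollsters_definitions_dirs for dynamic pollster definition files and, finding none, falls back to the built-in pollsters of the requested namespace; on a node that only runs the ipmi namespace an empty /etc/ceilometer/pollsters.d is expected. The discovery pass amounts to roughly the following; the helper is a hypothetical illustration under the assumption that definitions are *.yaml files in the configured directories, not ceilometer's actual implementation:

    import glob
    import os

    def find_dynamic_pollster_files(dirs=('/etc/ceilometer/pollsters.d',)):
        # Collect every YAML definition file from each existing directory.
        found = []
        for d in dirs:
            if not os.path.isdir(d):
                continue
            found.extend(sorted(glob.glob(os.path.join(d, '*.yaml'))))
        return found

    if not find_dynamic_pollster_files():
        print("No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].")
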
Nov 22 08:18:44 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:44.396 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpop9so_z9/privsep.sock']
Nov 22 08:18:44 compute-0 sudo[222902]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpop9so_z9/privsep.sock
Nov 22 08:18:44 compute-0 sudo[222902]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 08:18:44 compute-0 sudo[222902]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 22 08:18:44 compute-0 sudo[223007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jntcxjmrtuppqvgrzfnnzueqijbigzxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799524.3989875-462-180563280941857/AnsiballZ_container_config_hash.py'
Nov 22 08:18:44 compute-0 sudo[223007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:44 compute-0 python3.9[223009]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 08:18:44 compute-0 sudo[223007]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:45 compute-0 sudo[222902]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.187 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.187 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpop9so_z9/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.057 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.065 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.068 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.068 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
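
The sequence above is oslo.privsep bootstrapping: the agent, running as the unprivileged ceilometer user (uid 42405), launches a helper through sudo and ceilometer-rootwrap, the helper connects back over the Unix socket under /tmp, and the daemon then drops from full root to the bounded capability set it reports (CAP_CHOWN through CAP_SYS_ADMIN, with no inheritable capabilities). On the client side the pattern looks roughly like the sketch below, assuming oslo.privsep's public PrivContext API; read_sensor() is an illustrative entrypoint, not a real ceilometer function:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Context named after the one in the log: ceilometer.privsep.sys_admin_pctxt.
    # The capability list mirrors what the spawned daemon reports above.
    sys_admin_pctxt = priv_context.PrivContext(
        'ceilometer',
        cfg_section='privsep',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[caps.CAP_CHOWN, caps.CAP_DAC_OVERRIDE,
                      caps.CAP_DAC_READ_SEARCH, caps.CAP_FOWNER,
                      caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN],
    )

    @sys_admin_pctxt.entrypoint
    def read_sensor(path):
        # Executes inside the privileged daemon process; only the return
        # value travels back to the unprivileged caller over the socket.
        with open(path) as f:
            return f.read()
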
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.308 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.308 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.309 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.310 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.310 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.310 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.310 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.310 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.310 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.310 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.310 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.310 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.311 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
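
Two distinct failures empty out the ipmi namespace here: the hardware.ipmi.* sensor pollsters are skipped because no usable IPMI device exists on this virtualized node ("IPMITool not supported on host"), while every hardware.ipmi.node.* pollster fails with "object.__new__() takes exactly one argument (the type to instantiate)". The latter is the TypeError Python 3 raises when a class overrides __new__ and forwards its constructor arguments on to object.__new__(). A minimal reproduction of that exception (not ceilometer's actual pollster code):

    class NodePollster:
        # Singleton-style __new__ that forwards its arguments; under
        # Python 3, object.__new__() rejects the extras with exactly the
        # message quoted in the log lines above.
        def __new__(cls, *args, **kwargs):
            return object.__new__(cls, *args, **kwargs)

        def __init__(self, conf):
            self.conf = conf

    try:
        NodePollster({'ipmi': {}})
    except TypeError as exc:
        print(exc)  # object.__new__() takes exactly one argument (the type to instantiate)
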
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.314 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.314 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.314 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.314 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.314 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.314 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.314 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.315 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.315 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.315 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.315 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.315 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.315 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.315 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.316 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.316 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.316 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.316 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.316 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.316 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.316 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.316 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.317 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.317 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.317 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.317 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.317 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.317 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.317 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.317 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.317 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.317 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.318 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.318 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.318 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.318 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.318 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.318 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.318 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.318 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.318 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.319 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.319 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.319 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.319 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.319 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.319 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.321 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.321 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.322 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.322 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.322 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.322 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.322 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.322 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.322 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.322 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.322 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.322 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.323 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.323 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.323 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.323 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.323 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.323 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.323 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.323 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.324 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.324 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.324 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.324 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.324 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.324 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.324 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.324 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.324 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.325 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.325 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.325 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.325 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.325 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.325 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.325 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.325 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.325 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.325 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.326 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.326 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.326 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.326 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.326 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.326 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.326 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.326 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.326 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.327 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.327 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.327 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.327 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.327 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.327 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.327 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.327 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.328 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.328 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.328 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.328 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.328 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.328 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.328 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.328 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.328 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.329 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.329 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.329 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.329 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.329 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.329 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.329 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.329 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.330 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.330 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.330 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.330 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.330 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.330 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.330 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.330 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.330 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.330 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.331 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.331 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.331 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.331 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.331 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.331 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.331 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.331 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.331 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.331 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.332 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.332 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.332 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.332 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.332 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.332 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.332 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.332 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.332 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.333 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.333 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.333 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.333 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.333 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.333 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.333 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.333 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.333 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.334 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.334 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.334 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.334 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.334 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.334 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.334 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.334 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.334 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.334 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.334 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.335 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.335 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.335 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.335 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.335 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.335 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.335 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.335 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.335 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.335 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.336 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.336 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.336 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.336 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.336 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.336 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.336 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.336 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.336 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.337 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.337 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.337 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.337 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.337 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.337 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.337 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.337 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.337 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.337 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.338 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.338 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.338 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.338 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.338 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Nov 22 08:18:45 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:45.341 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Nov 22 08:18:45 compute-0 sudo[223165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cadoadugekdvbeocirbxofnecyxfgyub ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763799525.3546813-472-212507659841848/AnsiballZ_edpm_container_manage.py'
Nov 22 08:18:45 compute-0 sudo[223165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:45 compute-0 python3[223167]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 08:18:46 compute-0 podman[223194]: 2025-11-22 08:18:46.106593995 +0000 UTC m=+0.058167955 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 08:18:46 compute-0 podman[223192]: 2025-11-22 08:18:46.113221113 +0000 UTC m=+0.070929708 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 22 08:18:46 compute-0 podman[223196]: 2025-11-22 08:18:46.126862781 +0000 UTC m=+0.077809174 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 08:18:46 compute-0 podman[223231]: 2025-11-22 08:18:46.141571689 +0000 UTC m=+0.062359203 container create 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release=1214.1726694543, container_name=kepler, managed_by=edpm_ansible, architecture=x86_64, vendor=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, version=9.4, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, distribution-scope=public)
Nov 22 08:18:46 compute-0 podman[223231]: 2025-11-22 08:18:46.108609502 +0000 UTC m=+0.029397036 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Nov 22 08:18:46 compute-0 python3[223167]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Nov 22 08:18:46 compute-0 sudo[223165]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:46 compute-0 sudo[223449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrzqsygabbyseadavefqossizwiilkhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799526.4224377-480-192321913797375/AnsiballZ_stat.py'
Nov 22 08:18:46 compute-0 sudo[223449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:46 compute-0 python3.9[223451]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:18:46 compute-0 sudo[223449]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:47 compute-0 sudo[223603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siptepnmlxguzzxrsceqslitsknuutpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799527.1489644-489-6303164739352/AnsiballZ_file.py'
Nov 22 08:18:47 compute-0 sudo[223603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:47 compute-0 python3.9[223605]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:18:47 compute-0 sudo[223603]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:48 compute-0 sudo[223754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrzksethhgtaigubrawaiioazlgbxazr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799527.6294692-489-237342900964027/AnsiballZ_copy.py'
Nov 22 08:18:48 compute-0 sudo[223754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:48 compute-0 python3.9[223756]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763799527.6294692-489-237342900964027/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:18:48 compute-0 sudo[223754]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:49 compute-0 sudo[223830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxqobfjgnrbynyxaxdsxjfvzujenezei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799527.6294692-489-237342900964027/AnsiballZ_systemd.py'
Nov 22 08:18:49 compute-0 sudo[223830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:49 compute-0 python3.9[223832]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 08:18:49 compute-0 systemd[1]: Reloading.
Nov 22 08:18:49 compute-0 systemd-sysv-generator[223858]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:18:49 compute-0 systemd-rc-local-generator[223855]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:18:50 compute-0 sudo[223830]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:50 compute-0 sudo[223940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wawvzmslrtevuyelaxhhlkzlmanbbcnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799527.6294692-489-237342900964027/AnsiballZ_systemd.py'
Nov 22 08:18:50 compute-0 sudo[223940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:50 compute-0 python3.9[223942]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 08:18:50 compute-0 systemd[1]: Reloading.
Nov 22 08:18:50 compute-0 systemd-rc-local-generator[223964]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 08:18:50 compute-0 systemd-sysv-generator[223973]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 08:18:51 compute-0 systemd[1]: Starting kepler container...
Nov 22 08:18:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:18:51 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93.
Nov 22 08:18:51 compute-0 podman[223982]: 2025-11-22 08:18:51.222131532 +0000 UTC m=+0.151038756 container init 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, name=ubi9, com.redhat.component=ubi9-container, release-0.7.12=, release=1214.1726694543, vendor=Red Hat, Inc.)
Nov 22 08:18:51 compute-0 kepler[223998]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 22 08:18:51 compute-0 podman[223982]: 2025-11-22 08:18:51.254622606 +0000 UTC m=+0.183529800 container start 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, release-0.7.12=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, managed_by=edpm_ansible, io.openshift.expose-services=, release=1214.1726694543, vendor=Red Hat, Inc., config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30)
Nov 22 08:18:51 compute-0 podman[223982]: kepler
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.260259       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.261146       1 config.go:293] using gCgroup ID in the BPF program: true
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.261189       1 config.go:295] kernel version: 5.14
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.262091       1 power.go:78] Unable to obtain power, use estimate method
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.262115       1 redfish.go:169] failed to get redfish credential file path
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.262550       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.262564       1 power.go:79] using none to obtain power
Nov 22 08:18:51 compute-0 kepler[223998]: E1122 08:18:51.262581       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Nov 22 08:18:51 compute-0 kepler[223998]: E1122 08:18:51.262610       1 exporter.go:154] failed to init GPU accelerators: no devices found
Nov 22 08:18:51 compute-0 kepler[223998]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.264501       1 exporter.go:84] Number of CPUs: 8
Nov 22 08:18:51 compute-0 systemd[1]: Started kepler container.
Nov 22 08:18:51 compute-0 sudo[223940]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:51 compute-0 podman[224008]: 2025-11-22 08:18:51.339924981 +0000 UTC m=+0.073267834 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, com.redhat.component=ubi9-container, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.tags=base rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.buildah.version=1.29.0)
Nov 22 08:18:51 compute-0 systemd[1]: 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93-165c66cb52c395d6.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 08:18:51 compute-0 systemd[1]: 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93-165c66cb52c395d6.service: Failed with result 'exit-code'.
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.828740       1 watcher.go:83] Using in cluster k8s config
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.828782       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Nov 22 08:18:51 compute-0 kepler[223998]: E1122 08:18:51.828850       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.833272       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.833304       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.837122       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.837153       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.844306       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.844349       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.844366       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.851509       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.851543       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.851549       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.851552       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.851557       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.851567       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.851630       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.851652       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.851669       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.851684       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.852011       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Nov 22 08:18:51 compute-0 kepler[223998]: I1122 08:18:51.852298       1 exporter.go:208] Started Kepler in 592.311873ms
Nov 22 08:18:51 compute-0 sudo[224191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prsgkmjmquwjbqnhsumvebtajzpkcydp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799531.616142-513-101936490867090/AnsiballZ_systemd.py'
Nov 22 08:18:51 compute-0 sudo[224191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:52 compute-0 python3.9[224193]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:18:52 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Nov 22 08:18:52 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:52.386 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Nov 22 08:18:52 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:52.488 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Nov 22 08:18:52 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:52.488 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Nov 22 08:18:52 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:52.489 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Nov 22 08:18:52 compute-0 ceilometer_agent_ipmi[222667]: 2025-11-22 08:18:52.498 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Nov 22 08:18:52 compute-0 systemd[1]: libpod-c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd.scope: Deactivated successfully.
Nov 22 08:18:52 compute-0 systemd[1]: libpod-c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd.scope: Consumed 2.341s CPU time.
Nov 22 08:18:52 compute-0 podman[224197]: 2025-11-22 08:18:52.732174152 +0000 UTC m=+0.404543365 container died c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 08:18:52 compute-0 systemd[1]: c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd-547b59a2f17bf799.timer: Deactivated successfully.
Nov 22 08:18:52 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd.
Nov 22 08:18:52 compute-0 podman[224211]: 2025-11-22 08:18:52.909246137 +0000 UTC m=+0.154323120 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Nov 22 08:18:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd-userdata-shm.mount: Deactivated successfully.
Nov 22 08:18:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a77c59d9c513bca6ce8ec067adfa0a21d173348ae271a6de1698dd709fd963f0-merged.mount: Deactivated successfully.
Nov 22 08:18:52 compute-0 podman[224197]: 2025-11-22 08:18:52.979262499 +0000 UTC m=+0.651631722 container cleanup c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:18:52 compute-0 podman[224197]: ceilometer_agent_ipmi
Nov 22 08:18:53 compute-0 podman[224243]: ceilometer_agent_ipmi
Nov 22 08:18:53 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Nov 22 08:18:53 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Nov 22 08:18:53 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Nov 22 08:18:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:18:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77c59d9c513bca6ce8ec067adfa0a21d173348ae271a6de1698dd709fd963f0/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 08:18:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77c59d9c513bca6ce8ec067adfa0a21d173348ae271a6de1698dd709fd963f0/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 22 08:18:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77c59d9c513bca6ce8ec067adfa0a21d173348ae271a6de1698dd709fd963f0/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 22 08:18:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77c59d9c513bca6ce8ec067adfa0a21d173348ae271a6de1698dd709fd963f0/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 22 08:18:53 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd.
Nov 22 08:18:53 compute-0 podman[224255]: 2025-11-22 08:18:53.451205899 +0000 UTC m=+0.347665847 container init c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm)
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: + sudo -E kolla_set_configs
Nov 22 08:18:53 compute-0 podman[224255]: 2025-11-22 08:18:53.485675629 +0000 UTC m=+0.382135547 container start c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi)
Nov 22 08:18:53 compute-0 sudo[224277]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 22 08:18:53 compute-0 sudo[224277]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 08:18:53 compute-0 sudo[224277]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 22 08:18:53 compute-0 podman[224255]: ceilometer_agent_ipmi
Nov 22 08:18:53 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: INFO:__main__:Validating config file
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: INFO:__main__:Copying service configuration files
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: INFO:__main__:Writing out command to execute
Nov 22 08:18:53 compute-0 sudo[224277]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: ++ cat /run_command
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: + ARGS=
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: + sudo kolla_copy_cacerts
Nov 22 08:18:53 compute-0 sudo[224291]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 22 08:18:53 compute-0 sudo[224291]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 08:18:53 compute-0 sudo[224291]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 22 08:18:53 compute-0 sudo[224191]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:53 compute-0 sudo[224291]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: + [[ ! -n '' ]]
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: + . kolla_extend_start
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: + umask 0022
Nov 22 08:18:53 compute-0 ceilometer_agent_ipmi[224271]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Nov 22 08:18:53 compute-0 podman[224278]: 2025-11-22 08:18:53.604643431 +0000 UTC m=+0.108673330 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:18:53 compute-0 systemd[1]: c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd-7986a15f3e7f07ef.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 08:18:53 compute-0 systemd[1]: c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd-7986a15f3e7f07ef.service: Failed with result 'exit-code'.
Nov 22 08:18:54 compute-0 sudo[224453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjpzqxupgdbkidhmkkwstvtluycrescj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799533.773176-521-208778519762684/AnsiballZ_systemd.py'
Nov 22 08:18:54 compute-0 sudo[224453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:54 compute-0 python3.9[224455]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.554 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.555 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.555 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.555 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.555 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.555 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.555 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.555 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.556 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.556 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.556 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.556 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.556 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.556 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.556 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.556 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.557 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.557 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.557 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.557 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.557 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.557 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.557 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.558 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.558 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.558 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.558 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.558 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.558 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.558 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.559 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.559 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.559 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.559 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.559 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.559 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.559 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.559 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.560 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.560 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.560 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.560 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.560 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.560 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.560 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.560 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.561 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.561 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.561 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.561 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.561 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.561 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.561 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.562 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.562 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.562 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.562 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.562 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.562 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.562 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.562 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.563 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.563 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.563 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.563 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.563 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.563 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.563 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.563 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.564 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.564 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.564 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.564 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.564 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.564 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.564 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.565 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.565 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.565 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.565 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.565 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.565 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.565 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.565 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.566 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.566 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.566 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.566 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.566 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.566 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.566 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.566 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.567 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.567 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.567 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.567 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.567 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.567 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.567 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.568 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.568 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.568 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 systemd[1]: Stopping kepler container...
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.568 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.568 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.568 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.568 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.568 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.569 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.569 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.569 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.569 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.569 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.569 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.569 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.569 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.570 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.570 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.570 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.570 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.570 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.570 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.570 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.571 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.571 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.571 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.571 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.571 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.571 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.571 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.572 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.572 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.572 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.572 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.572 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.572 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.572 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.572 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.573 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.573 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.573 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.573 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.573 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.573 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.573 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.573 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.574 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.574 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.574 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.574 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.574 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.574 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.574 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.574 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.575 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.575 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.575 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.575 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.575 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.575 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.575 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.576 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.576 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.576 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.576 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.576 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.595 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.596 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.597 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 22 08:18:54 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:54.609 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpk_1q2hum/privsep.sock']
Nov 22 08:18:54 compute-0 sudo[224476]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpk_1q2hum/privsep.sock
Nov 22 08:18:54 compute-0 sudo[224476]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 08:18:54 compute-0 sudo[224476]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 22 08:18:54 compute-0 kepler[223998]: I1122 08:18:54.727648       1 exporter.go:218] Received shutdown signal
Nov 22 08:18:54 compute-0 kepler[223998]: I1122 08:18:54.728855       1 exporter.go:226] Exiting...
Nov 22 08:18:54 compute-0 systemd[1]: libpod-03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93.scope: Deactivated successfully.
Nov 22 08:18:54 compute-0 podman[224459]: 2025-11-22 08:18:54.938695147 +0000 UTC m=+0.356753266 container died 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release=1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, version=9.4, container_name=kepler, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, release-0.7.12=)
Nov 22 08:18:55 compute-0 systemd[1]: 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93-165c66cb52c395d6.timer: Deactivated successfully.
Nov 22 08:18:55 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93.
Nov 22 08:18:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93-userdata-shm.mount: Deactivated successfully.
Nov 22 08:18:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-47a74b6a26499c55325981194534ce9d3ad20ef099178c972969c265f8beef2f-merged.mount: Deactivated successfully.
Nov 22 08:18:55 compute-0 sudo[224476]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.257 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.257 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpk_1q2hum/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.140 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.148 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.152 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.152 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Nov 22 08:18:55 compute-0 podman[224459]: 2025-11-22 08:18:55.323974953 +0000 UTC m=+0.742033102 container cleanup 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, container_name=kepler, release-0.7.12=)
Nov 22 08:18:55 compute-0 podman[224459]: kepler
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.360 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.360 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.362 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.362 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.363 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.363 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.363 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.363 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.364 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.364 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.364 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.364 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.365 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.371 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.371 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.371 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.372 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.372 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.372 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.372 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.372 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.373 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.373 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.373 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.373 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.373 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.374 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.374 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.374 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.375 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.375 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.375 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.375 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.376 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.376 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.376 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.376 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.377 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.377 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.377 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.377 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.378 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.378 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.378 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.378 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.378 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.379 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.379 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.379 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.379 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.379 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.380 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.380 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 podman[224498]: kepler
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.380 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.380 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.381 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.381 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.381 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.381 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.381 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.382 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.382 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.382 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.382 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.383 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.383 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.383 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.383 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.383 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.384 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.384 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.384 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.384 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.384 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.385 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.385 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.385 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.385 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.385 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.386 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.386 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.386 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.386 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.387 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.387 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.387 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.387 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.387 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.387 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.388 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.388 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.388 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 systemd[1]: Stopped kepler container.
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.388 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.388 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.388 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.389 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.389 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.389 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.389 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.389 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.389 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.389 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.389 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.390 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.390 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.390 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.390 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.390 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.390 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.391 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.391 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.391 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.391 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.391 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.391 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.391 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.392 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.392 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.392 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.392 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.392 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.392 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.392 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.393 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.393 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.393 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.393 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.393 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.393 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.393 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.394 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.394 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.394 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.394 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.394 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.395 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.395 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.395 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.395 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.395 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.395 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.396 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.396 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.396 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.396 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.396 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.396 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.396 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.397 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.397 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.397 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.397 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.397 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.397 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.398 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.398 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.398 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.398 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.398 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.398 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.399 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.399 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.399 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.399 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.399 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.399 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.400 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.400 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.400 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.400 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.400 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.401 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.402 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.402 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.403 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.404 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.405 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.405 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.405 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.405 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.405 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.405 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.405 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.405 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 systemd[1]: Starting kepler container...
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.406 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.406 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.406 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.406 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.406 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.406 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.406 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.407 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Nov 22 08:18:55 compute-0 ceilometer_agent_ipmi[224271]: 2025-11-22 08:18:55.409 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Nov 22 08:18:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:18:55 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93.
Nov 22 08:18:55 compute-0 podman[224512]: 2025-11-22 08:18:55.580316403 +0000 UTC m=+0.163843140 container init 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.component=ubi9-container, container_name=kepler, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, managed_by=edpm_ansible, version=9.4, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 22 08:18:55 compute-0 podman[224512]: 2025-11-22 08:18:55.600098105 +0000 UTC m=+0.183624862 container start 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, distribution-scope=public, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, architecture=x86_64)
Nov 22 08:18:55 compute-0 kepler[224528]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 22 08:18:55 compute-0 kepler[224528]: I1122 08:18:55.620720       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Nov 22 08:18:55 compute-0 kepler[224528]: I1122 08:18:55.620902       1 config.go:293] using gCgroup ID in the BPF program: true
Nov 22 08:18:55 compute-0 kepler[224528]: I1122 08:18:55.620931       1 config.go:295] kernel version: 5.14
Nov 22 08:18:55 compute-0 kepler[224528]: I1122 08:18:55.621655       1 power.go:78] Unable to obtain power, use estimate method
Nov 22 08:18:55 compute-0 kepler[224528]: I1122 08:18:55.621680       1 redfish.go:169] failed to get redfish credential file path
Nov 22 08:18:55 compute-0 kepler[224528]: I1122 08:18:55.622152       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Nov 22 08:18:55 compute-0 kepler[224528]: I1122 08:18:55.622168       1 power.go:79] using none to obtain power
Nov 22 08:18:55 compute-0 kepler[224528]: E1122 08:18:55.622185       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Nov 22 08:18:55 compute-0 kepler[224528]: E1122 08:18:55.622207       1 exporter.go:154] failed to init GPU accelerators: no devices found
Nov 22 08:18:55 compute-0 kepler[224528]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 22 08:18:55 compute-0 kepler[224528]: I1122 08:18:55.624353       1 exporter.go:84] Number of CPUs: 8
Nov 22 08:18:55 compute-0 podman[224512]: kepler
Nov 22 08:18:55 compute-0 systemd[1]: Started kepler container.
Nov 22 08:18:55 compute-0 sudo[224453]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:55 compute-0 podman[224533]: 2025-11-22 08:18:55.772177599 +0000 UTC m=+0.162870623 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2024-09-18T21:23:30, version=9.4, release-0.7.12=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc.)
Nov 22 08:18:55 compute-0 systemd[1]: 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93-72d96c28e2d3331a.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 08:18:55 compute-0 systemd[1]: 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93-72d96c28e2d3331a.service: Failed with result 'exit-code'.
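The failed transient unit above is the per-healthcheck service podman generates for the kepler container; its status=1/FAILURE lines up with the health_status=starting, health_failing_streak=1 recorded at 08:18:55 while the exporter was still initializing. A minimal sketch for re-checking by hand (container name taken from this log):

    # Re-run the container healthcheck and print the state podman recorded.
    podman healthcheck run kepler; echo "healthcheck exit=$?"
    podman inspect kepler --format '{{json .State.Health}}'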
Nov 22 08:18:55 compute-0 podman[224553]: 2025-11-22 08:18:55.840381757 +0000 UTC m=+0.089083773 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 22 08:18:56 compute-0 sudo[224735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxrptnlqhcpguubhutzxyttttlfcpfhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799535.8982105-529-108192953450399/AnsiballZ_find.py'
Nov 22 08:18:56 compute-0 sudo[224735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.171042       1 watcher.go:83] Using in cluster k8s config
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.171079       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Nov 22 08:18:56 compute-0 kepler[224528]: E1122 08:18:56.171138       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
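The watcher probes the two environment variables named in the message; on a standalone EDPM compute node nothing injects them, so falling back with the k8s APIserver watcher disabled is the expected path rather than a fault. Only a pod running in-cluster would see something like the following (placeholder values, not from this host):

    # Injected by the kubelet inside Kubernetes; absent here by design.
    export KUBERNETES_SERVICE_HOST=10.96.0.1   # placeholder apiserver ClusterIP
    export KUBERNETES_SERVICE_PORT=443         # conventional in-cluster port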
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.174516       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.174553       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.178074       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.178104       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.184415       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.184483       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.184500       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.191399       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.191605       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.191624       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.191628       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.191633       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.191641       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.191735       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.191768       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.191785       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.191800       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.191856       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Nov 22 08:18:56 compute-0 kepler[224528]: I1122 08:18:56.192030       1 exporter.go:208] Started Kepler in 571.650766ms
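With the listener up on 0.0.0.0:8888 (matching ports 8888:8888 in the container config earlier in this log), the endpoint can be spot-checked from the host. The kepler_node metric prefix is an assumption from Kepler's usual naming, not something shown in this log:

    # Pull a few node-level samples from the exporter that just started.
    curl -s http://127.0.0.1:8888/metrics | grep -m 5 '^kepler_node'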
Nov 22 08:18:56 compute-0 python3.9[224737]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 08:18:56 compute-0 sudo[224735]: pam_unix(sudo:session): session closed for user root
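The find task above runs with file_type=directory and recurse=False against /var/lib/openstack/healthchecks/; a rough shell equivalent of that module call is:

    # Top-level directories only, mirroring file_type=directory, recurse=False.
    find /var/lib/openstack/healthchecks/ -mindepth 1 -maxdepth 1 -type d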
Nov 22 08:18:57 compute-0 sudo[224897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cllficndhcnkpkcjlzqllgxdvcdoezno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799536.8267324-539-232358368977326/AnsiballZ_podman_container_info.py'
Nov 22 08:18:57 compute-0 sudo[224897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:57 compute-0 python3.9[224899]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Nov 22 08:18:57 compute-0 sudo[224897]: pam_unix(sudo:session): session closed for user root
Nov 22 08:18:58 compute-0 sudo[225062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bderfmpdplywgdwbwfslxjizbxryanjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799537.9398937-547-273669649656994/AnsiballZ_podman_container_exec.py'
Nov 22 08:18:58 compute-0 sudo[225062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:58 compute-0 python3.9[225064]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:18:58 compute-0 systemd[1]: Started libpod-conmon-3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d.scope.
Nov 22 08:18:58 compute-0 podman[225065]: 2025-11-22 08:18:58.802045329 +0000 UTC m=+0.102061511 container exec 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:18:58 compute-0 podman[225065]: 2025-11-22 08:18:58.836410215 +0000 UTC m=+0.136426397 container exec_died 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 08:18:58 compute-0 systemd[1]: libpod-conmon-3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d.scope: Deactivated successfully.
Nov 22 08:18:58 compute-0 sudo[225062]: pam_unix(sudo:session): session closed for user root
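This exec task (and the id -g one that follows at 08:18:59) only reads the UID/GID the container's default user resolves to; outside Ansible the same probe is:

    # Ask the running container for its effective uid and gid.
    podman exec ovn_controller id -u
    podman exec ovn_controller id -g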
Nov 22 08:18:59 compute-0 sudo[225243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znvoavatkbzzwqjodrmsyvhwfafkpilp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799539.0794442-555-130689144431193/AnsiballZ_podman_container_exec.py'
Nov 22 08:18:59 compute-0 sudo[225243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:18:59 compute-0 python3.9[225245]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:18:59 compute-0 podman[203476]: time="2025-11-22T08:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:18:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 22 08:18:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4271 "" "Go-http-client/1.1"
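These two requests hit the libpod REST API over the podman socket (the podman_exporter container later in this log mounts /run/podman/podman.sock for the same purpose). The list endpoint can be replayed directly; the host segment "d" is a dummy that curl requires for unix-socket URLs:

    # Same containers/json call as logged above, straight over the socket.
    curl -s --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/json?all=true&external=false'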
Nov 22 08:18:59 compute-0 systemd[1]: Started libpod-conmon-3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d.scope.
Nov 22 08:18:59 compute-0 podman[225246]: 2025-11-22 08:18:59.896536961 +0000 UTC m=+0.152709271 container exec 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 22 08:19:00 compute-0 podman[225267]: 2025-11-22 08:19:00.004656426 +0000 UTC m=+0.091928014 container exec_died 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 22 08:19:00 compute-0 podman[225246]: 2025-11-22 08:19:00.064084725 +0000 UTC m=+0.320257045 container exec_died 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller)
Nov 22 08:19:00 compute-0 systemd[1]: libpod-conmon-3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d.scope: Deactivated successfully.
Nov 22 08:19:00 compute-0 sudo[225243]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:01 compute-0 podman[225278]: 2025-11-22 08:19:01.14877727 +0000 UTC m=+0.099928003 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 22 08:19:01 compute-0 openstack_network_exporter[205661]: ERROR   08:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:19:01 compute-0 openstack_network_exporter[205661]: ERROR   08:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:19:01 compute-0 openstack_network_exporter[205661]: ERROR   08:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:19:01 compute-0 openstack_network_exporter[205661]: ERROR   08:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:19:01 compute-0 openstack_network_exporter[205661]: ERROR   08:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
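The four ERROR lines are the exporter probing daemons this node does not run: ovn-northd belongs on the controllers, and the dpif-netdev calls need a userspace datapath that does not exist here. The checks reduce to looking for control sockets under the paths the container mounts (a sketch; exact socket names vary per daemon and pid):

    # Expect ovs-vswitchd/ovsdb-server sockets at most; no ovn-northd.*.ctl
    # is expected on a compute node.
    ls /var/run/openvswitch/*.ctl /var/lib/openvswitch/ovn/*.ctl 2>/dev/null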
Nov 22 08:19:02 compute-0 sudo[225447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdqczavaduubikkxmwyfwnecepnhiswa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799541.8434057-563-191352195510350/AnsiballZ_file.py'
Nov 22 08:19:02 compute-0 sudo[225447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:02 compute-0 python3.9[225449]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:02 compute-0 sudo[225447]: pam_unix(sudo:session): session closed for user root
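The file task pins /var/lib/openstack/healthchecks/ovn_controller to root-owned mode 0700, recursively. As plain shell, the module call above amounts to roughly:

    # state=directory + recurse=True + owner/group=0 + mode=0700, by hand.
    mkdir -p /var/lib/openstack/healthchecks/ovn_controller
    chown -R 0:0 /var/lib/openstack/healthchecks/ovn_controller
    chmod -R 0700 /var/lib/openstack/healthchecks/ovn_controller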
Nov 22 08:19:03 compute-0 sudo[225599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xinlbankuxkckysiuqlzlhkcpnyqagdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799542.698496-572-102024692222541/AnsiballZ_podman_container_info.py'
Nov 22 08:19:03 compute-0 sudo[225599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:03 compute-0 python3.9[225601]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Nov 22 08:19:04 compute-0 sudo[225599]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:04 compute-0 podman[225615]: 2025-11-22 08:19:04.121997426 +0000 UTC m=+0.076800314 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
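node_exporter publishes port 9100 with --web.config.file pointing at the mounted TLS material, so the scrape endpoint is presumably HTTPS; assuming that web config enables TLS with the deployment-internal certificate, a spot-check would be:

    # -k accepted here because the cert under /etc/node_exporter/tls is internal
    # to this deployment; port 9100 comes from the config_data above.
    curl -sk https://127.0.0.1:9100/metrics | head -n 5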
Nov 22 08:19:04 compute-0 sudo[225788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpzsybphuqdbttjzdpdzxgbfynvpklwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799544.217629-580-260560126677776/AnsiballZ_podman_container_exec.py'
Nov 22 08:19:04 compute-0 sudo[225788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:04 compute-0 python3.9[225790]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:19:04 compute-0 systemd[1]: Started libpod-conmon-b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4.scope.
Nov 22 08:19:04 compute-0 podman[225791]: 2025-11-22 08:19:04.90451317 +0000 UTC m=+0.125454169 container exec b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:19:04 compute-0 podman[225791]: 2025-11-22 08:19:04.944303343 +0000 UTC m=+0.165244332 container exec_died b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 08:19:05 compute-0 sudo[225788]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:05 compute-0 systemd[1]: libpod-conmon-b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4.scope: Deactivated successfully.
Nov 22 08:19:05 compute-0 sudo[225974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbwmoqcwuqlbjfyhacufrbwrsmtarkaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799545.4119115-588-25575579609789/AnsiballZ_podman_container_exec.py'
Nov 22 08:19:05 compute-0 sudo[225974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:06 compute-0 python3.9[225976]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:19:06 compute-0 systemd[1]: Started libpod-conmon-b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4.scope.
Nov 22 08:19:06 compute-0 podman[225977]: 2025-11-22 08:19:06.42410039 +0000 UTC m=+0.266945791 container exec b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:19:06 compute-0 podman[225977]: 2025-11-22 08:19:06.697317061 +0000 UTC m=+0.540162442 container exec_died b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 22 08:19:07 compute-0 sudo[225974]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:07 compute-0 systemd[1]: libpod-conmon-b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4.scope: Deactivated successfully.
Nov 22 08:19:07 compute-0 sudo[226156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhcirixpkvjhearxhjnycovqqwirbqkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799547.6362588-596-242988909833857/AnsiballZ_file.py'
Nov 22 08:19:07 compute-0 sudo[226156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:08 compute-0 python3.9[226158]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:08 compute-0 sudo[226156]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:09 compute-0 sudo[226308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlctrbetgqnbptuitqewnmfunomvsapz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799549.1076562-605-19989876308284/AnsiballZ_podman_container_info.py'
Nov 22 08:19:09 compute-0 sudo[226308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:09 compute-0 python3.9[226310]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Nov 22 08:19:09 compute-0 sudo[226308]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:19:09.948 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:19:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:19:09.949 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:19:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:19:09.949 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:19:10 compute-0 sudo[226471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtdmagxpliqyyskpxihqcwuhtueueucp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799550.0738742-613-276518369209783/AnsiballZ_podman_container_exec.py'
Nov 22 08:19:10 compute-0 sudo[226471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:10 compute-0 python3.9[226473]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:19:10 compute-0 systemd[1]: Started libpod-conmon-02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878.scope.
Nov 22 08:19:10 compute-0 podman[226474]: 2025-11-22 08:19:10.824737875 +0000 UTC m=+0.082583720 container exec 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:19:10 compute-0 podman[226474]: 2025-11-22 08:19:10.856568402 +0000 UTC m=+0.114414247 container exec_died 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 22 08:19:10 compute-0 systemd[1]: libpod-conmon-02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878.scope: Deactivated successfully.
Nov 22 08:19:10 compute-0 sudo[226471]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:11 compute-0 sudo[226651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whdujdoinfokqfyyhunvgonrdforoalt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799551.0997376-621-159186961941699/AnsiballZ_podman_container_exec.py'
Nov 22 08:19:11 compute-0 sudo[226651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:11 compute-0 python3.9[226653]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:19:11 compute-0 systemd[1]: Started libpod-conmon-02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878.scope.
Nov 22 08:19:11 compute-0 podman[226654]: 2025-11-22 08:19:11.741336292 +0000 UTC m=+0.080740511 container exec 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 22 08:19:11 compute-0 podman[226654]: 2025-11-22 08:19:11.777876957 +0000 UTC m=+0.117281156 container exec_died 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 08:19:11 compute-0 systemd[1]: libpod-conmon-02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878.scope: Deactivated successfully.
Nov 22 08:19:11 compute-0 sudo[226651]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:12 compute-0 sudo[226832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkulclavxzmggzdxghmxfzluofhyvjos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799552.0170662-629-268230284602898/AnsiballZ_file.py'
Nov 22 08:19:12 compute-0 sudo[226832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:12 compute-0 python3.9[226834]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:12 compute-0 sudo[226832]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:13 compute-0 sudo[226984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euqlgntjsibyjnoxzdthvcghibrwbpxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799552.8200326-638-256947552366424/AnsiballZ_podman_container_info.py'
Nov 22 08:19:13 compute-0 sudo[226984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:14 compute-0 python3.9[226986]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Nov 22 08:19:14 compute-0 sudo[226984]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:14 compute-0 sudo[227149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njqhxdfmdiedegsymoxbaaydbgdqxeee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799554.3370423-646-280553417934624/AnsiballZ_podman_container_exec.py'
Nov 22 08:19:14 compute-0 sudo[227149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:14 compute-0 python3.9[227151]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:19:14 compute-0 systemd[1]: Started libpod-conmon-c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d.scope.
Nov 22 08:19:15 compute-0 podman[227152]: 2025-11-22 08:19:15.027147051 +0000 UTC m=+0.107617902 container exec c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Nov 22 08:19:15 compute-0 podman[227152]: 2025-11-22 08:19:15.039166258 +0000 UTC m=+0.119637089 container exec_died c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Nov 22 08:19:15 compute-0 nova_compute[189268]: 2025-11-22 08:19:15.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:19:15 compute-0 sudo[227149]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:15 compute-0 systemd[1]: libpod-conmon-c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d.scope: Deactivated successfully.
Nov 22 08:19:16 compute-0 sudo[227380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsfntvttikudttkfrbwelqqgujassblt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799555.8601797-654-126092285104921/AnsiballZ_podman_container_exec.py'
Nov 22 08:19:16 compute-0 sudo[227380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:16 compute-0 podman[227307]: 2025-11-22 08:19:16.249519226 +0000 UTC m=+0.093443067 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 08:19:16 compute-0 podman[227308]: 2025-11-22 08:19:16.266110948 +0000 UTC m=+0.111231471 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 08:19:16 compute-0 podman[227310]: 2025-11-22 08:19:16.2662057 +0000 UTC m=+0.103674064 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 08:19:16 compute-0 python3.9[227394]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:19:16 compute-0 systemd[1]: Started libpod-conmon-c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d.scope.
Nov 22 08:19:16 compute-0 podman[227396]: 2025-11-22 08:19:16.572622116 +0000 UTC m=+0.098994477 container exec c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Nov 22 08:19:16 compute-0 podman[227396]: 2025-11-22 08:19:16.606378536 +0000 UTC m=+0.132750867 container exec_died c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a)
Nov 22 08:19:16 compute-0 systemd[1]: libpod-conmon-c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d.scope: Deactivated successfully.
Nov 22 08:19:16 compute-0 sudo[227380]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:17 compute-0 sudo[227576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqdovtdhlusqrekkayhlemsyxkugumdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799556.87484-662-28805329330532/AnsiballZ_file.py'
Nov 22 08:19:17 compute-0 sudo[227576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:17 compute-0 python3.9[227578]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:17 compute-0 sudo[227576]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.095 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.128 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.130 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.131 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.131 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:19:18 compute-0 sudo[227728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deytqucwrkbtqklxkkfbcebdktfdbpnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799557.8946733-671-98304186862448/AnsiballZ_podman_container_info.py'
Nov 22 08:19:18 compute-0 sudo[227728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.492 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.493 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5750MB free_disk=72.56032180786133GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.493 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.494 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:19:18 compute-0 python3.9[227730]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.568 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.569 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.647 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:19:18 compute-0 sudo[227728]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.660 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.662 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:19:18 compute-0 nova_compute[189268]: 2025-11-22 08:19:18.663 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:19:19 compute-0 sudo[227893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpjmciitkzibofwflxvxmhzoyiodzyzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799558.8776262-679-45280417203660/AnsiballZ_podman_container_exec.py'
Nov 22 08:19:19 compute-0 sudo[227893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:19 compute-0 python3.9[227895]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:19:19 compute-0 systemd[1]: Started libpod-conmon-213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001.scope.
Nov 22 08:19:19 compute-0 podman[227896]: 2025-11-22 08:19:19.592605705 +0000 UTC m=+0.105666479 container exec 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:19:19 compute-0 podman[227896]: 2025-11-22 08:19:19.625653866 +0000 UTC m=+0.138714630 container exec_died 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:19:19 compute-0 nova_compute[189268]: 2025-11-22 08:19:19.659 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:19:19 compute-0 systemd[1]: libpod-conmon-213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001.scope: Deactivated successfully.
Nov 22 08:19:19 compute-0 sudo[227893]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:19 compute-0 nova_compute[189268]: 2025-11-22 08:19:19.676 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:19:19 compute-0 nova_compute[189268]: 2025-11-22 08:19:19.677 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:19:19 compute-0 nova_compute[189268]: 2025-11-22 08:19:19.677 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:19:19 compute-0 nova_compute[189268]: 2025-11-22 08:19:19.687 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:19:19 compute-0 nova_compute[189268]: 2025-11-22 08:19:19.688 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:19:20 compute-0 sudo[228076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tllimktpqauhtptxqhgknencvliyefco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799559.9581964-687-6194379027366/AnsiballZ_podman_container_exec.py'
Nov 22 08:19:20 compute-0 sudo[228076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:20 compute-0 python3.9[228078]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:19:20 compute-0 systemd[1]: Started libpod-conmon-213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001.scope.
Nov 22 08:19:20 compute-0 podman[228079]: 2025-11-22 08:19:20.783903074 +0000 UTC m=+0.111943741 container exec 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:19:20 compute-0 podman[228098]: 2025-11-22 08:19:20.853689325 +0000 UTC m=+0.057365934 container exec_died 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:19:20 compute-0 podman[228079]: 2025-11-22 08:19:20.879997131 +0000 UTC m=+0.208037778 container exec_died 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:19:20 compute-0 systemd[1]: libpod-conmon-213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001.scope: Deactivated successfully.
Nov 22 08:19:20 compute-0 sudo[228076]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:21 compute-0 sudo[228260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwrzocidspyxlskaswhmqtuufpalpgnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799561.1990383-695-235905944040908/AnsiballZ_file.py'
Nov 22 08:19:21 compute-0 sudo[228260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:21 compute-0 python3.9[228262]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:21 compute-0 sudo[228260]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:22 compute-0 sudo[228412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxxifegezarygsidikxtwlckrdvucslx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799562.0696507-704-263382877343128/AnsiballZ_podman_container_info.py'
Nov 22 08:19:22 compute-0 sudo[228412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:22 compute-0 python3.9[228414]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Nov 22 08:19:22 compute-0 sudo[228412]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:23 compute-0 podman[228502]: 2025-11-22 08:19:23.137339347 +0000 UTC m=+0.083413593 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Nov 22 08:19:23 compute-0 sudo[228594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmbpfobowfwgmcyfpvurtevkbbadpldt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799562.9245927-712-11160321528650/AnsiballZ_podman_container_exec.py'
Nov 22 08:19:23 compute-0 sudo[228594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:23 compute-0 python3.9[228596]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:19:23 compute-0 systemd[1]: Started libpod-conmon-2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30.scope.
Nov 22 08:19:23 compute-0 podman[228597]: 2025-11-22 08:19:23.869408637 +0000 UTC m=+0.140813876 container exec 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:19:23 compute-0 podman[228597]: 2025-11-22 08:19:23.918389172 +0000 UTC m=+0.189794381 container exec_died 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 08:19:23 compute-0 systemd[1]: libpod-conmon-2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30.scope: Deactivated successfully.
Nov 22 08:19:23 compute-0 podman[228610]: 2025-11-22 08:19:23.963025427 +0000 UTC m=+0.097063525 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 08:19:23 compute-0 systemd[1]: c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd-7986a15f3e7f07ef.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 08:19:23 compute-0 systemd[1]: c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd-7986a15f3e7f07ef.service: Failed with result 'exit-code'.
Nov 22 08:19:23 compute-0 sudo[228594]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:24 compute-0 sudo[228790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taeiswlbueqfypsvgjcevvazpwmovfct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799564.1508906-720-262855401942007/AnsiballZ_podman_container_exec.py'
Nov 22 08:19:24 compute-0 sudo[228790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:24 compute-0 python3.9[228792]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:19:24 compute-0 systemd[1]: Started libpod-conmon-2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30.scope.
Nov 22 08:19:24 compute-0 podman[228793]: 2025-11-22 08:19:24.832996564 +0000 UTC m=+0.093892279 container exec 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:19:24 compute-0 podman[228793]: 2025-11-22 08:19:24.86518552 +0000 UTC m=+0.126081215 container exec_died 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 22 08:19:24 compute-0 systemd[1]: libpod-conmon-2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30.scope: Deactivated successfully.
Nov 22 08:19:24 compute-0 sudo[228790]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:25 compute-0 sudo[228972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qutffwjotnavlqulgzuwhyuvufgmjjaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799565.241001-728-274853374603964/AnsiballZ_file.py'
Nov 22 08:19:25 compute-0 sudo[228972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:25 compute-0 python3.9[228974]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:25 compute-0 sudo[228972]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:26 compute-0 podman[229022]: 2025-11-22 08:19:26.142021329 +0000 UTC m=+0.093172558 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=kepler, maintainer=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, release-0.7.12=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, architecture=x86_64, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9)
Nov 22 08:19:26 compute-0 podman[229023]: 2025-11-22 08:19:26.190195931 +0000 UTC m=+0.141382272 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 08:19:26 compute-0 sudo[229170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhrndnjkrerjiwdyykdamfpedwxvknxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799566.0314043-737-239470496654087/AnsiballZ_podman_container_info.py'
Nov 22 08:19:26 compute-0 sudo[229170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:26 compute-0 python3.9[229172]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Nov 22 08:19:26 compute-0 sudo[229170]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:28 compute-0 sudo[229334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjybnsckdzekqhgzujagggpkcwwitajy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799567.6524312-745-194206654517845/AnsiballZ_podman_container_exec.py'
Nov 22 08:19:28 compute-0 sudo[229334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:28 compute-0 python3.9[229336]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:19:28 compute-0 systemd[1]: Started libpod-conmon-0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4.scope.
Nov 22 08:19:28 compute-0 podman[229337]: 2025-11-22 08:19:28.382561057 +0000 UTC m=+0.102705509 container exec 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, release=1755695350, version=9.6, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal)
Nov 22 08:19:28 compute-0 podman[229337]: 2025-11-22 08:19:28.415428553 +0000 UTC m=+0.135572995 container exec_died 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, config_id=edpm, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6)
Nov 22 08:19:28 compute-0 systemd[1]: libpod-conmon-0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4.scope: Deactivated successfully.
Nov 22 08:19:28 compute-0 sudo[229334]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:29 compute-0 sudo[229517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zimmznbcpwrmpyusqadwczilopnbqolo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799568.711611-753-70542563857669/AnsiballZ_podman_container_exec.py'
Nov 22 08:19:29 compute-0 sudo[229517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:29 compute-0 python3.9[229519]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:19:29 compute-0 systemd[1]: Started libpod-conmon-0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4.scope.
Nov 22 08:19:29 compute-0 podman[229520]: 2025-11-22 08:19:29.476543105 +0000 UTC m=+0.174951216 container exec 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, name=ubi9-minimal)
Nov 22 08:19:29 compute-0 podman[229520]: 2025-11-22 08:19:29.509523463 +0000 UTC m=+0.207931554 container exec_died 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, maintainer=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 22 08:19:29 compute-0 systemd[1]: libpod-conmon-0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4.scope: Deactivated successfully.
Nov 22 08:19:29 compute-0 sudo[229517]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:29 compute-0 podman[203476]: time="2025-11-22T08:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:19:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 22 08:19:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4273 "" "Go-http-client/1.1"
Nov 22 08:19:30 compute-0 sudo[229699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mniwgpaxegfmecrovicnxnvhjbobqfxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799569.8722606-761-167119936623533/AnsiballZ_file.py'
Nov 22 08:19:30 compute-0 sudo[229699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:30 compute-0 python3.9[229701]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:30 compute-0 sudo[229699]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:31 compute-0 sudo[229851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqmevmhzjdqpxdmgxxrtsaehvnqzfckr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799570.7114801-770-93467279603515/AnsiballZ_podman_container_info.py'
Nov 22 08:19:31 compute-0 sudo[229851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:31 compute-0 python3.9[229853]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Nov 22 08:19:31 compute-0 sudo[229851]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:31 compute-0 openstack_network_exporter[205661]: ERROR   08:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:19:31 compute-0 openstack_network_exporter[205661]: ERROR   08:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:19:31 compute-0 openstack_network_exporter[205661]: ERROR   08:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:19:31 compute-0 openstack_network_exporter[205661]: ERROR   08:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:19:31 compute-0 openstack_network_exporter[205661]: ERROR   08:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:19:31 compute-0 sudo[230028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffnknoxnaqyedihqrkksciymqssgqwgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799571.516463-778-82184642793457/AnsiballZ_podman_container_exec.py'
Nov 22 08:19:31 compute-0 sudo[230028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:31 compute-0 podman[229990]: 2025-11-22 08:19:31.876216347 +0000 UTC m=+0.072356681 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, io.openshift.tags=minimal rhel9, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Nov 22 08:19:32 compute-0 python3.9[230036]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:19:32 compute-0 systemd[1]: Started libpod-conmon-c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd.scope.
Nov 22 08:19:32 compute-0 podman[230038]: 2025-11-22 08:19:32.297594044 +0000 UTC m=+0.192784341 container exec c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:19:32 compute-0 podman[230038]: 2025-11-22 08:19:32.348075399 +0000 UTC m=+0.243265716 container exec_died c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 22 08:19:32 compute-0 sudo[230028]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:32 compute-0 systemd[1]: libpod-conmon-c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd.scope: Deactivated successfully.
Nov 22 08:19:32 compute-0 sudo[230214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcbvimksemktiyryaqmvyjcguflspifr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799572.6577883-786-280130578915255/AnsiballZ_podman_container_exec.py'
Nov 22 08:19:32 compute-0 sudo[230214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:33 compute-0 python3.9[230216]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:19:33 compute-0 systemd[1]: Started libpod-conmon-c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd.scope.
Nov 22 08:19:33 compute-0 podman[230217]: 2025-11-22 08:19:33.509548296 +0000 UTC m=+0.268632638 container exec c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 08:19:33 compute-0 podman[230235]: 2025-11-22 08:19:33.586355918 +0000 UTC m=+0.060578821 container exec_died c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 08:19:33 compute-0 podman[230217]: 2025-11-22 08:19:33.668164616 +0000 UTC m=+0.427248938 container exec_died c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 08:19:33 compute-0 systemd[1]: libpod-conmon-c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd.scope: Deactivated successfully.
Nov 22 08:19:33 compute-0 sudo[230214]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:34 compute-0 sudo[230411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-falpovqiqwthuxkhawrdmadexmkiweio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799573.951332-794-240154699360809/AnsiballZ_file.py'
Nov 22 08:19:34 compute-0 sudo[230411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:34 compute-0 podman[230370]: 2025-11-22 08:19:34.33867567 +0000 UTC m=+0.103728916 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 22 08:19:34 compute-0 python3.9[230420]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:34 compute-0 sudo[230411]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:35 compute-0 sudo[230571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgnlltnzyojgaalhfxhjlztpkcnnnhud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799574.7774134-803-274988872115061/AnsiballZ_podman_container_info.py'
Nov 22 08:19:35 compute-0 sudo[230571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:35 compute-0 python3.9[230573]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Nov 22 08:19:35 compute-0 sudo[230571]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:36 compute-0 sudo[230736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiipxmlifojcimmvcxiautjfqknlwnit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799575.6629267-811-220500137927359/AnsiballZ_podman_container_exec.py'
Nov 22 08:19:36 compute-0 sudo[230736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:36 compute-0 python3.9[230738]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:19:36 compute-0 systemd[1]: Started libpod-conmon-03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93.scope.
Nov 22 08:19:36 compute-0 podman[230739]: 2025-11-22 08:19:36.519257475 +0000 UTC m=+0.260166257 container exec 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, container_name=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 22 08:19:36 compute-0 podman[230758]: 2025-11-22 08:19:36.600322513 +0000 UTC m=+0.064778236 container exec_died 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.component=ubi9-container, name=ubi9, architecture=x86_64, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 22 08:19:36 compute-0 podman[230739]: 2025-11-22 08:19:36.63953072 +0000 UTC m=+0.380439442 container exec_died 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, release-0.7.12=, architecture=x86_64, name=ubi9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, io.openshift.tags=base rhel9, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 22 08:19:36 compute-0 systemd[1]: libpod-conmon-03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93.scope: Deactivated successfully.
Nov 22 08:19:36 compute-0 sudo[230736]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:37 compute-0 sudo[230918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcwhdyybmxkthrggdyljqsjbjtnaqrti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799577.0698822-819-93003512297839/AnsiballZ_podman_container_exec.py'
Nov 22 08:19:37 compute-0 sudo[230918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:37 compute-0 python3.9[230920]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 22 08:19:37 compute-0 systemd[1]: Started libpod-conmon-03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93.scope.
Nov 22 08:19:37 compute-0 podman[230921]: 2025-11-22 08:19:37.816752266 +0000 UTC m=+0.135697927 container exec 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, managed_by=edpm_ansible, architecture=x86_64, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=kepler, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container)
Nov 22 08:19:37 compute-0 podman[230939]: 2025-11-22 08:19:37.898574145 +0000 UTC m=+0.063347127 container exec_died 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, release-0.7.12=, managed_by=edpm_ansible, architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=kepler, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Nov 22 08:19:37 compute-0 podman[230921]: 2025-11-22 08:19:37.960729068 +0000 UTC m=+0.279674729 container exec_died 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9, config_id=edpm, container_name=kepler, io.openshift.tags=base rhel9, release=1214.1726694543)
Nov 22 08:19:37 compute-0 systemd[1]: libpod-conmon-03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93.scope: Deactivated successfully.
Nov 22 08:19:38 compute-0 sudo[230918]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:39 compute-0 sudo[231101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txicknkbbpknwunkatiqknlmytrcewxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799579.1658237-827-256528713872874/AnsiballZ_file.py'
Nov 22 08:19:39 compute-0 sudo[231101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:39 compute-0 python3.9[231103]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:39 compute-0 sudo[231101]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:40 compute-0 sudo[231253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjnxunktjbtlyovntvtjjegqilmtupjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799580.0015934-836-166770778697509/AnsiballZ_file.py'
Nov 22 08:19:40 compute-0 sudo[231253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:40 compute-0 python3.9[231255]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:40 compute-0 sudo[231253]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:41 compute-0 sudo[231405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrpmuomvfhzraurpbsksxoxaouskicuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799580.7942772-844-249303013851146/AnsiballZ_stat.py'
Nov 22 08:19:41 compute-0 sudo[231405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:41 compute-0 python3.9[231407]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:19:41 compute-0 sudo[231405]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:41 compute-0 sudo[231528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chkzidlucqjbiyovxrbsxxsvekhskcpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799580.7942772-844-249303013851146/AnsiballZ_copy.py'
Nov 22 08:19:41 compute-0 sudo[231528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:42 compute-0 python3.9[231530]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799580.7942772-844-249303013851146/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:42 compute-0 sudo[231528]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:42 compute-0 sudo[231680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opqnsmbqjetrkhpjhrpnytvpafpzlqpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799582.3704278-860-33417567785167/AnsiballZ_file.py'
Nov 22 08:19:42 compute-0 sudo[231680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:42 compute-0 python3.9[231682]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:42 compute-0 sudo[231680]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:43 compute-0 sudo[231832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbeytbxvsshnwsggbpdcbnpffybjruzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799583.16144-868-7901397806855/AnsiballZ_stat.py'
Nov 22 08:19:43 compute-0 sudo[231832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:43 compute-0 python3.9[231834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:19:43 compute-0 sudo[231832]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:44 compute-0 sudo[231910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbibsnpnmzjzxmwuikwumvbbqpyfggmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799583.16144-868-7901397806855/AnsiballZ_file.py'
Nov 22 08:19:44 compute-0 sudo[231910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:44 compute-0 python3.9[231912]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:44 compute-0 sudo[231910]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:44 compute-0 sudo[232062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paswqpfwvhhefkxyvwoygqaucjdgjekm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799584.5623848-880-102574739784169/AnsiballZ_stat.py'
Nov 22 08:19:44 compute-0 sudo[232062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:45 compute-0 python3.9[232064]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:19:45 compute-0 sudo[232062]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:45 compute-0 sudo[232140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdgwiujvqkvafcwqvcrkfixxlcrlsgnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799584.5623848-880-102574739784169/AnsiballZ_file.py'
Nov 22 08:19:45 compute-0 sudo[232140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:45 compute-0 python3.9[232142]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.hujybj7w recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:45 compute-0 sudo[232140]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:46 compute-0 sudo[232292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxmvpwxhzdnulyiogazhevrnekoxqlij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799585.8871427-892-33271763564198/AnsiballZ_stat.py'
Nov 22 08:19:46 compute-0 sudo[232292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:46 compute-0 podman[232294]: 2025-11-22 08:19:46.375607494 +0000 UTC m=+0.078592422 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 08:19:46 compute-0 podman[232296]: 2025-11-22 08:19:46.38096484 +0000 UTC m=+0.070309226 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:19:46 compute-0 podman[232301]: 2025-11-22 08:19:46.403244957 +0000 UTC m=+0.094164266 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:19:46 compute-0 python3.9[232295]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:19:46 compute-0 sudo[232292]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:46 compute-0 sudo[232427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyjizaloclumaxrzjhzilguyliaqybhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799585.8871427-892-33271763564198/AnsiballZ_file.py'
Nov 22 08:19:46 compute-0 sudo[232427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:47 compute-0 python3.9[232429]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:47 compute-0 sudo[232427]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:48 compute-0 sudo[232579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wexgeipcczcntdbtacjgokelmquzbggu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799587.2814415-905-97023067267004/AnsiballZ_command.py'
Nov 22 08:19:48 compute-0 sudo[232579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:48 compute-0 python3.9[232581]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:19:48 compute-0 sudo[232579]: pam_unix(sudo:session): session closed for user root
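The `nft -j list ruleset` call above dumps the live ruleset in libnftables JSON, which is what makes it diffable from Ansible. A short sketch of walking that output; the top-level "nftables" array is the standard shape, and only chains are printed here:

    import json
    import subprocess

    raw = subprocess.run(["nft", "-j", "list", "ruleset"],
                         capture_output=True, text=True, check=True).stdout
    # Each array element wraps one object: metainfo, table, chain, rule, ...
    for obj in json.loads(raw)["nftables"]:
        if "chain" in obj:
            c = obj["chain"]
            print(f'{c["family"]}/{c["table"]}/{c["name"]} policy={c.get("policy")}')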
Nov 22 08:19:49 compute-0 sudo[232732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miayvrokmzndwfaecxvjrhtihwzdknog ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763799588.6990905-913-239769080220391/AnsiballZ_edpm_nftables_from_files.py'
Nov 22 08:19:49 compute-0 sudo[232732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:49 compute-0 python3[232734]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 22 08:19:49 compute-0 sudo[232732]: pam_unix(sudo:session): session closed for user root
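edpm_nftables_from_files is the custom module that collects rule snippets dropped under /var/lib/edpm-config/firewall and feeds them to the templated edpm-*.nft files written by the following tasks. The log does not show its internals, so the sketch below is only the general aggregation idea, with an invented one-list-per-YAML-file layout:

    import glob
    import yaml  # PyYAML

    def rules_from_files(src: str) -> list:
        """Illustrative only: merge per-service rule snippets into one ordered list."""
        rules = []
        for path in sorted(glob.glob(f"{src}/*.yaml")):
            with open(path) as fh:
                rules.extend(yaml.safe_load(fh) or [])
        return rules

    rules = rules_from_files("/var/lib/edpm-config/firewall")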
Nov 22 08:19:50 compute-0 sudo[232885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsuckheymjgwtjnqudyvzljcfjuperus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799590.2252502-921-229057407444065/AnsiballZ_stat.py'
Nov 22 08:19:50 compute-0 sudo[232885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:50 compute-0 python3.9[232887]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:19:50 compute-0 sudo[232885]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:51 compute-0 sudo[232963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfwqwmcclkdqrqeeqsjpsalztwhioygr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799590.2252502-921-229057407444065/AnsiballZ_file.py'
Nov 22 08:19:51 compute-0 sudo[232963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:51 compute-0 python3.9[232965]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:51 compute-0 sudo[232963]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:52 compute-0 sudo[233115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brwquuhyabcrvijndcgfeblhydzssopr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799591.8379722-933-262274051950953/AnsiballZ_stat.py'
Nov 22 08:19:52 compute-0 sudo[233115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:52 compute-0 python3.9[233117]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:19:52 compute-0 sudo[233115]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:52 compute-0 sudo[233193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okoxsveweonnndvqrqjsntqsewiqifzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799591.8379722-933-262274051950953/AnsiballZ_file.py'
Nov 22 08:19:52 compute-0 sudo[233193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:53 compute-0 python3.9[233195]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:53 compute-0 sudo[233193]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:53 compute-0 sudo[233359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtxgwqkhofqznqdjbyzgnilxqjnzbmit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799593.4241157-945-95511859632858/AnsiballZ_stat.py'
Nov 22 08:19:53 compute-0 sudo[233359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:53 compute-0 podman[233319]: 2025-11-22 08:19:53.839316733 +0000 UTC m=+0.090097586 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:19:54 compute-0 python3.9[233365]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:19:54 compute-0 sudo[233359]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:54 compute-0 podman[233368]: 2025-11-22 08:19:54.109122171 +0000 UTC m=+0.066999175 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:19:54 compute-0 sudo[233462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaaoafgatspjarriqxtgamkygltprkry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799593.4241157-945-95511859632858/AnsiballZ_file.py'
Nov 22 08:19:54 compute-0 sudo[233462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:54 compute-0 python3.9[233464]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:54 compute-0 sudo[233462]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:55 compute-0 sudo[233614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kodartplwronsuldsoslomhyazlasdda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799594.8888469-957-127698349056176/AnsiballZ_stat.py'
Nov 22 08:19:55 compute-0 sudo[233614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:55 compute-0 python3.9[233616]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:19:55 compute-0 sudo[233614]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:55 compute-0 sudo[233692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbcjaakxxgorwbjtfvfewysdedkkadzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799594.8888469-957-127698349056176/AnsiballZ_file.py'
Nov 22 08:19:55 compute-0 sudo[233692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:56 compute-0 python3.9[233694]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:56 compute-0 sudo[233692]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:56 compute-0 sudo[233879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-extpnwjlivreiznfeiusardglytebwyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799596.2199948-969-139706072739095/AnsiballZ_stat.py'
Nov 22 08:19:56 compute-0 sudo[233879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:56 compute-0 podman[233818]: 2025-11-22 08:19:56.864658138 +0000 UTC m=+0.108685742 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, distribution-scope=public, io.openshift.tags=base rhel9, vcs-type=git, config_id=edpm, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.buildah.version=1.29.0)
Nov 22 08:19:56 compute-0 podman[233819]: 2025-11-22 08:19:56.888800155 +0000 UTC m=+0.125573881 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 08:19:57 compute-0 python3.9[233885]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:19:57 compute-0 sudo[233879]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:57 compute-0 sudo[234013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvkfptspctxggnuuafrducmrazuygbvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799596.2199948-969-139706072739095/AnsiballZ_copy.py'
Nov 22 08:19:57 compute-0 sudo[234013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:57 compute-0 python3.9[234015]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763799596.2199948-969-139706072739095/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:57 compute-0 sudo[234013]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:58 compute-0 sudo[234165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyrfrngduouyqqsykexbdbfzigjfslek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799597.9669607-984-15022376926849/AnsiballZ_file.py'
Nov 22 08:19:58 compute-0 sudo[234165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:58 compute-0 python3.9[234167]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:19:58 compute-0 sudo[234165]: pam_unix(sudo:session): session closed for user root
Nov 22 08:19:59 compute-0 sudo[234317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htxmcatofvuzswnzcupofshwnxrnvkcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799598.7715254-992-122465351617303/AnsiballZ_command.py'
Nov 22 08:19:59 compute-0 sudo[234317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:19:59 compute-0 python3.9[234319]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:19:59 compute-0 sudo[234317]: pam_unix(sudo:session): session closed for user root
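The pipeline above concatenates the five generated files in their load order (chains, flushes, rules, update-jumps, jumps) and hands the result to `nft -c -f -`, a check-only parse that validates the combined ruleset without touching the kernel. The same dry run, reproduced in Python with the file list copied from the log:

    import subprocess

    FILES = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    combined = "".join(open(f).read() for f in FILES)
    check = subprocess.run(["nft", "-c", "-f", "-"], input=combined,
                           text=True, capture_output=True)
    if check.returncode != 0:
        raise SystemExit(f"ruleset failed validation:\n{check.stderr}")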
Nov 22 08:19:59 compute-0 podman[203476]: time="2025-11-22T08:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:19:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:19:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4273 "" "Go-http-client/1.1"
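The two GET lines are prometheus-podman-exporter (started earlier with CONTAINER_HOST=unix:///run/podman/podman.sock) scraping the libpod REST API over the unix socket that podman[203476] serves. A standard-library sketch of the same container listing; the socket path and endpoint are taken from the log:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket, enough for the libpod API."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")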
Nov 22 08:20:00 compute-0 sudo[234472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-firocfosjlrmgucjpusmvucyxpttezjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799600.0467207-1000-5695448468140/AnsiballZ_blockinfile.py'
Nov 22 08:20:00 compute-0 sudo[234472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:20:00 compute-0 python3.9[234474]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:20:00 compute-0 sudo[234472]: pam_unix(sudo:session): session closed for user root
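The blockinfile task maintains a marker-delimited block in /etc/sysconfig/nftables.conf and re-validates the whole file with `nft -c -f %s` before the write is committed. From the invocation's block, marker, marker_begin and marker_end parameters, the managed section it keeps in that file is:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK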
Nov 22 08:20:01 compute-0 openstack_network_exporter[205661]: ERROR   08:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:20:01 compute-0 openstack_network_exporter[205661]: ERROR   08:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:20:01 compute-0 openstack_network_exporter[205661]: ERROR   08:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:20:01 compute-0 openstack_network_exporter[205661]: ERROR   08:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:20:01 compute-0 openstack_network_exporter[205661]: ERROR   08:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
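openstack_network_exporter talks to ovsdb-server, ovn-northd and the vswitchd datapath through their ovs-appctl control sockets, and each of these errors reports a missing socket rather than a failed call: ovn-northd runs on the control plane, not on a compute node, and the dpif-netdev/pmd-* queries only answer once a userspace datapath exists. A quick existence check for the sockets it probes; the *.ctl glob is the conventional per-daemon naming, and paths can vary by deployment:

    import glob

    # ovs-appctl targets are control sockets named <daemon>.<pid>.ctl
    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket files found")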
Nov 22 08:20:02 compute-0 podman[234543]: 2025-11-22 08:20:02.109398491 +0000 UTC m=+0.072365178 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7)
Nov 22 08:20:02 compute-0 sudo[234644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpsbnsjdygrznbjvfyygdlfeweaeklpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799601.162616-1009-61851176186349/AnsiballZ_command.py'
Nov 22 08:20:02 compute-0 sudo[234644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:20:02 compute-0 python3.9[234646]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:20:02 compute-0 sudo[234644]: pam_unix(sudo:session): session closed for user root
Nov 22 08:20:03 compute-0 sudo[234797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fywqpkdrlpwhmalzqwscqvekruuvymnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799602.7678616-1017-195328113094544/AnsiballZ_stat.py'
Nov 22 08:20:03 compute-0 sudo[234797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:20:03 compute-0 python3.9[234799]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 08:20:03 compute-0 sudo[234797]: pam_unix(sudo:session): session closed for user root
Nov 22 08:20:03 compute-0 sudo[234951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrarwzytptaxrqxnwzeziiuzgqlrqjbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799603.5046887-1025-27564730000158/AnsiballZ_command.py'
Nov 22 08:20:03 compute-0 sudo[234951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:20:04 compute-0 python3.9[234953]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:20:04 compute-0 sudo[234951]: pam_unix(sudo:session): session closed for user root
Nov 22 08:20:04 compute-0 sudo[235123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kosmhpfbucmksinbvhyypthwaurvbzng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799604.2525883-1033-25814169308979/AnsiballZ_file.py'
Nov 22 08:20:04 compute-0 podman[235080]: 2025-11-22 08:20:04.656368682 +0000 UTC m=+0.073151896 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:20:04 compute-0 sudo[235123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:20:04 compute-0 python3.9[235132]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:20:04 compute-0 sudo[235123]: pam_unix(sudo:session): session closed for user root
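Taken together, these tasks form a change-marker idiom: writing edpm-rules.nft touched edpm-rules.nft.changed (08:19:58), a stat re-checked for it (08:20:03), the flushes/rules/update-jumps files were applied with `nft -f -` (08:20:04), and the marker was then removed. Condensed into one sketch, with paths from the log and the conditional inferred from the task order:

    import os
    import subprocess

    MARKER = "/etc/nftables/edpm-rules.nft.changed"
    APPLY = [
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
    ]

    if os.path.exists(MARKER):
        # Re-apply only the runtime-mutable parts, then clear the marker.
        combined = "".join(open(f).read() for f in APPLY)
        subprocess.run(["nft", "-f", "-"], input=combined, text=True, check=True)
        os.remove(MARKER)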
Nov 22 08:20:05 compute-0 sshd-session[214782]: Connection closed by 192.168.122.30 port 34404
Nov 22 08:20:05 compute-0 sshd-session[214779]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:20:05 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Nov 22 08:20:05 compute-0 systemd[1]: session-27.scope: Consumed 1min 27.714s CPU time.
Nov 22 08:20:05 compute-0 systemd-logind[826]: Session 27 logged out. Waiting for processes to exit.
Nov 22 08:20:05 compute-0 systemd-logind[826]: Removed session 27.
Nov 22 08:20:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:20:09.950 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:20:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:20:09.950 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:20:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:20:09.950 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:20:10 compute-0 sshd-session[235157]: Accepted publickey for zuul from 192.168.122.30 port 52960 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 08:20:10 compute-0 systemd-logind[826]: New session 28 of user zuul.
Nov 22 08:20:10 compute-0 systemd[1]: Started Session 28 of User zuul.
Nov 22 08:20:10 compute-0 sshd-session[235157]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 08:20:12 compute-0 python3.9[235310]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:20:14 compute-0 sudo[235464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdzvdfcbqqsvmagjvneprsmmrnksbudj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799613.0446098-34-180803262032476/AnsiballZ_systemd.py'
Nov 22 08:20:14 compute-0 sudo[235464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:20:14 compute-0 nova_compute[189268]: 2025-11-22 08:20:14.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:20:14 compute-0 nova_compute[189268]: 2025-11-22 08:20:14.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 08:20:14 compute-0 nova_compute[189268]: 2025-11-22 08:20:14.118 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 08:20:14 compute-0 nova_compute[189268]: 2025-11-22 08:20:14.120 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:20:14 compute-0 nova_compute[189268]: 2025-11-22 08:20:14.121 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 08:20:14 compute-0 nova_compute[189268]: 2025-11-22 08:20:14.138 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:20:14 compute-0 python3.9[235466]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Nov 22 08:20:14 compute-0 sudo[235464]: pam_unix(sudo:session): session closed for user root
Nov 22 08:20:15 compute-0 sudo[235617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xncfccbdeiwesagbrizxjsqnlmdcvyry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799614.6768215-42-35496085108600/AnsiballZ_setup.py'
Nov 22 08:20:15 compute-0 sudo[235617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:20:15 compute-0 python3.9[235619]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 08:20:15 compute-0 sudo[235617]: pam_unix(sudo:session): session closed for user root
Nov 22 08:20:16 compute-0 nova_compute[189268]: 2025-11-22 08:20:16.151 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:20:16 compute-0 sudo[235701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzdemljtssltzqevdqynwkmgietfnkvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799614.6768215-42-35496085108600/AnsiballZ_dnf.py'
Nov 22 08:20:16 compute-0 sudo[235701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:20:16 compute-0 python3.9[235703]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 08:20:17 compute-0 podman[235706]: 2025-11-22 08:20:17.117824828 +0000 UTC m=+0.071814252 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 08:20:17 compute-0 podman[235707]: 2025-11-22 08:20:17.142681125 +0000 UTC m=+0.093613598 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 08:20:17 compute-0 podman[235705]: 2025-11-22 08:20:17.146090335 +0000 UTC m=+0.102790601 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 08:20:19 compute-0 nova_compute[189268]: 2025-11-22 08:20:19.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:20:19 compute-0 nova_compute[189268]: 2025-11-22 08:20:19.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:20:19 compute-0 nova_compute[189268]: 2025-11-22 08:20:19.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:20:19 compute-0 nova_compute[189268]: 2025-11-22 08:20:19.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:20:19 compute-0 nova_compute[189268]: 2025-11-22 08:20:19.111 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:20:19 compute-0 nova_compute[189268]: 2025-11-22 08:20:19.111 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:20:19 compute-0 nova_compute[189268]: 2025-11-22 08:20:19.111 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:20:19 compute-0 sudo[235701]: pam_unix(sudo:session): session closed for user root
Nov 22 08:20:20 compute-0 sudo[235921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybhbhktbrmlqqmbvwcdcxyhhobmxumcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799619.5380192-54-82292884441424/AnsiballZ_stat.py'
Nov 22 08:20:20 compute-0 sudo[235921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.101 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.101 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.125 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.126 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.126 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.126 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:20:20 compute-0 python3.9[235923]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:20:20 compute-0 sudo[235921]: pam_unix(sudo:session): session closed for user root
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.480 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.482 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5727MB free_disk=72.5595474243164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.482 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.483 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.622 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.624 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.703 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing inventories for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.796 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating ProviderTree inventory for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.796 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating inventory in ProviderTree for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.814 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing aggregate associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.834 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing trait associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, traits: COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.857 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:20:20 compute-0 sudo[236044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wndbiophbfjzyzggnuwtddrtpapjavdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799619.5380192-54-82292884441424/AnsiballZ_copy.py'
Nov 22 08:20:20 compute-0 sudo[236044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.868 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.869 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:20:20 compute-0 nova_compute[189268]: 2025-11-22 08:20:20.870 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.387s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
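
The inventory dict logged above is what the resource tracker pushed to Placement. Schedulable capacity per resource class is (total - reserved) * allocation_ratio, so this node overcommits CPU to 32 VCPU while disk is deliberately undercommitted at ratio 0.9. A minimal sketch of that arithmetic using the values from the log (the helper name is ours, not a Nova API):

    # Sketch: derive schedulable capacity from the logged inventory payload.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 0,   'allocation_ratio': 0.9},
    }

    def effective_capacity(inv):
        # Placement treats usable capacity as (total - reserved) * allocation_ratio.
        return {rc: (v['total'] - v['reserved']) * v['allocation_ratio']
                for rc, v in inv.items()}

    print(effective_capacity(inventory))
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~71.1
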
Nov 22 08:20:21 compute-0 python3.9[236046]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1763799619.5380192-54-82292884441424/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:20:21 compute-0 sudo[236044]: pam_unix(sudo:session): session closed for user root
Nov 22 08:20:21 compute-0 nova_compute[189268]: 2025-11-22 08:20:21.868 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:20:21 compute-0 sudo[236196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbbttxpryeaxebshnhnlsuhstbbrxxpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799621.3856914-69-241852051382883/AnsiballZ_file.py'
Nov 22 08:20:21 compute-0 sudo[236196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.087 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; the polling cycle can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.088 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
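
The two lines above explain the cycle that follows: the [pollsters] source has more pollsters than the single worker thread, so each pollster is registered on the executor and simply queues until the worker is free. A minimal sketch of that pattern, assuming nothing about ceilometer internals beyond what the log shows (all names here are ours):

    # Sketch: more tasks than workers on a ThreadPoolExecutor just queue up
    # and run serially, which is why a one-thread pool lengthens the cycle.
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        # stand-in for a pollster's sample-gathering call
        return f"{name}: polled"

    pollsters = ['cpu', 'memory.usage', 'network.incoming.bytes']
    with ThreadPoolExecutor(max_workers=1) as executor:  # one worker, as logged
        for result in executor.map(poll, pollsters):
            print(result)
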
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.088 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.089 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.089 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.091 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.091 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.092 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.092 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.093 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.093 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.093 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.093 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.093 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.093 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.093 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.096 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.096 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.096 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.096 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.096 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.096 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.097 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.097 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.097 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.097 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.097 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.097 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.097 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.097 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.098 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.099 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.101 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.102 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.102 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.102 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.102 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.102 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.102 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.103 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.103 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.104 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.105 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.105 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.106 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.105 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.106 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.106 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.107 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.107 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.107 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.107 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.107 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.107 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.107 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
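
Every pollster in this cycle is skipped for the same reason: the shared local_instances discovery returned an empty list (the discovery cache logged above is {'local_instances': []}), i.e. no guest VMs are running on compute-0 yet. A sketch of that discover-then-skip flow, with hypothetical names, purely to illustrate the control flow rather than ceilometer's actual code:

    # Sketch: one discovery result is cached per cycle and shared by all
    # pollsters; an empty result short-circuits each of them.
    discovery_cache = {}

    def discover(method):
        # cache discovery per cycle; on this host there are no instances
        return discovery_cache.setdefault(method, [])

    for pollster in ('cpu', 'memory.usage', 'disk.root.size'):
        resources = discover('local_instances')
        if not resources:
            print(f"Skip pollster {pollster}, no resources found this cycle")
            continue
        # ... sample collection for `resources` would run here
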
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.109 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.109 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.109 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.109 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.109 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.110 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.110 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.110 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.110 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.110 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.110 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.110 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.111 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.111 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.111 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.111 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.112 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.112 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.112 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.112 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.112 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:20:22.112 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:20:22 compute-0 python3.9[236198]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:20:22 compute-0 sudo[236196]: pam_unix(sudo:session): session closed for user root
Nov 22 08:20:23 compute-0 sudo[236349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mauaaslebboztdcktmidpqhgfbaqgqdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799623.2700226-77-95132723322518/AnsiballZ_stat.py'
Nov 22 08:20:23 compute-0 sudo[236349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:20:23 compute-0 python3.9[236351]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 08:20:23 compute-0 sudo[236349]: pam_unix(sudo:session): session closed for user root
Nov 22 08:20:24 compute-0 podman[236411]: 2025-11-22 08:20:24.121639155 +0000 UTC m=+0.071693788 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 08:20:24 compute-0 podman[236463]: 2025-11-22 08:20:24.208194546 +0000 UTC m=+0.067156808 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
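
Both podman events above are periodic healthcheck results: health_status=healthy with health_failing_streak=0, produced by the configured test commands ('/openstack/healthcheck compute' and '/openstack/healthcheck ipmi'). The same check can be triggered by hand; a sketch assuming podman is on PATH, using the container name from the log:

    # Sketch: run the container's configured healthcheck once and report.
    # The container name is from the log; exit code 0 means healthy.
    import subprocess

    r = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
        capture_output=True, text=True,
    )
    print("healthy" if r.returncode == 0
          else f"unhealthy: {r.stdout or r.stderr}")
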
Nov 22 08:20:24 compute-0 sudo[236509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwuhatapayipovdxusfaqbbbsxwejswz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799623.2700226-77-95132723322518/AnsiballZ_copy.py'
Nov 22 08:20:24 compute-0 sudo[236509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:20:24 compute-0 python3.9[236512]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1763799623.2700226-77-95132723322518/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 08:20:24 compute-0 sudo[236509]: pam_unix(sudo:session): session closed for user root
Nov 22 08:20:25 compute-0 sudo[236662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwhvopxieltbtfnjwfvihaggohxuwqrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763799624.6366327-92-76145396225361/AnsiballZ_systemd.py'
Nov 22 08:20:25 compute-0 sudo[236662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:20:25 compute-0 python3.9[236664]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 08:20:25 compute-0 systemd[1]: Stopping System Logging Service...
Nov 22 08:20:26 compute-0 rsyslogd[1013]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1013" x-info="https://www.rsyslog.com"] exiting on signal 15.
Nov 22 08:20:26 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Nov 22 08:20:26 compute-0 systemd[1]: Stopped System Logging Service.
Nov 22 08:20:26 compute-0 systemd[1]: rsyslog.service: Consumed 3.284s CPU time, 8.0M memory peak, read 0B from disk, written 6.2M to disk.
Nov 22 08:20:26 compute-0 systemd[1]: Starting System Logging Service...
Nov 22 08:20:26 compute-0 rsyslogd[236668]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="236668" x-info="https://www.rsyslog.com"] start
Nov 22 08:20:26 compute-0 systemd[1]: Started System Logging Service.
Nov 22 08:20:26 compute-0 rsyslogd[236668]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 08:20:26 compute-0 rsyslogd[236668]: Warning: Certificate file is not set [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Nov 22 08:20:26 compute-0 rsyslogd[236668]: Warning: Key file is not set [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Nov 22 08:20:26 compute-0 rsyslogd[236668]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2506.0-2.el9]
Nov 22 08:20:26 compute-0 sudo[236662]: pam_unix(sudo:session): session closed for user root
Nov 22 08:20:26 compute-0 rsyslogd[236668]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2506.0-2.el9]
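The rsyslog warnings above (codes 2330 and 2331) show the ossl netstream driver bringing up TLS to the remote syslog server 172.17.0.80 without a client certificate or key configured; the "no shared curve" line is OpenSSL negotiation chatter, not a failure. A minimal sketch of how the missing files could be supplied, assuming hypothetical certificate paths (only the CA bundle path appears elsewhere in this log):

    # Sketch only: the cert/key paths below are assumptions, not taken from this host.
    cat >/etc/rsyslog.d/05-tls.conf <<'EOF'
    global(
      DefaultNetstreamDriver="ossl"
      DefaultNetstreamDriverCAFile="/etc/pki/tls/certs/ca-bundle.trust.crt"
      DefaultNetstreamDriverCertFile="/etc/pki/tls/certs/rsyslog-client.crt"
      DefaultNetstreamDriverKeyFile="/etc/pki/tls/private/rsyslog-client.key"
    )
    EOF
    systemctl restart rsyslog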
Nov 22 08:20:26 compute-0 sshd-session[235160]: Connection closed by 192.168.122.30 port 52960
Nov 22 08:20:26 compute-0 sshd-session[235157]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:20:26 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Nov 22 08:20:26 compute-0 systemd[1]: session-28.scope: Consumed 10.193s CPU time.
Nov 22 08:20:26 compute-0 systemd-logind[826]: Session 28 logged out. Waiting for processes to exit.
Nov 22 08:20:26 compute-0 systemd-logind[826]: Removed session 28.
Nov 22 08:20:27 compute-0 podman[236697]: 2025-11-22 08:20:27.118821659 +0000 UTC m=+0.071806661 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, com.redhat.component=ubi9-container, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-type=git, config_id=edpm)
Nov 22 08:20:27 compute-0 podman[236698]: 2025-11-22 08:20:27.159608759 +0000 UTC m=+0.111449841 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 22 08:20:29 compute-0 podman[203476]: time="2025-11-22T08:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:20:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:20:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4278 "" "Go-http-client/1.1"
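The two GET lines are the libpod REST API being scraped over the podman socket; the Go-http-client user agent matches the prometheus-podman-exporter configured further down with CONTAINER_HOST=unix:///run/podman/podman.sock. Roughly the same container listing can be pulled by hand; a sketch, assuming curl and jq are available on the host:

    # List all containers through the libpod API on the podman socket
    # (the "d" hostname is a placeholder curl requires for unix sockets).
    curl -s --unix-socket /run/podman/podman.sock \
      'http://d/v4.9.3/libpod/containers/json?all=true' | jq -r '.[].Names[]'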
Nov 22 08:20:31 compute-0 openstack_network_exporter[205661]: ERROR   08:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:20:31 compute-0 openstack_network_exporter[205661]: ERROR   08:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:20:31 compute-0 openstack_network_exporter[205661]: ERROR   08:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:20:31 compute-0 openstack_network_exporter[205661]: ERROR   08:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:20:31 compute-0 openstack_network_exporter[205661]: ERROR   08:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
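Each "no control socket files found" error above means openstack_network_exporter could not find a *.ctl file for the daemon it tried to query; ovn-northd typically runs on controller nodes rather than computes, and the ovsdb-server socket may simply not be visible at the paths the exporter mounts (/run/openvswitch and /run/ovn, per its config below). A quick check of what is actually present, as a sketch:

    # Control sockets the exporter probes for; host paths taken from the
    # exporter's volume mounts in this log. Absence of ovn-northd.*.ctl
    # is expected on a compute node.
    ls /var/run/openvswitch/*.ctl /var/lib/openvswitch/ovn/*.ctl 2>/dev/null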
Nov 22 08:20:33 compute-0 podman[236737]: 2025-11-22 08:20:33.122918785 +0000 UTC m=+0.069749037 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 22 08:20:35 compute-0 podman[236759]: 2025-11-22 08:20:35.132254811 +0000 UTC m=+0.089667604 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:20:48 compute-0 podman[236783]: 2025-11-22 08:20:48.147639422 +0000 UTC m=+0.093495335 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 08:20:48 compute-0 podman[236782]: 2025-11-22 08:20:48.149503632 +0000 UTC m=+0.089975592 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:20:48 compute-0 podman[236781]: 2025-11-22 08:20:48.177346758 +0000 UTC m=+0.125418710 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 08:20:55 compute-0 podman[236840]: 2025-11-22 08:20:55.126353335 +0000 UTC m=+0.068579476 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 22 08:20:55 compute-0 podman[236839]: 2025-11-22 08:20:55.153618056 +0000 UTC m=+0.102225276 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 22 08:20:58 compute-0 podman[236876]: 2025-11-22 08:20:58.159849707 +0000 UTC m=+0.109368835 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-container, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1214.1726694543, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 22 08:20:58 compute-0 podman[236877]: 2025-11-22 08:20:58.175014129 +0000 UTC m=+0.117419848 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 22 08:20:59 compute-0 podman[203476]: time="2025-11-22T08:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:20:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:20:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4281 "" "Go-http-client/1.1"
Nov 22 08:21:01 compute-0 openstack_network_exporter[205661]: ERROR   08:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:21:01 compute-0 openstack_network_exporter[205661]: ERROR   08:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:21:01 compute-0 openstack_network_exporter[205661]: ERROR   08:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:21:01 compute-0 openstack_network_exporter[205661]: ERROR   08:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:21:01 compute-0 openstack_network_exporter[205661]: ERROR   08:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:21:04 compute-0 podman[236918]: 2025-11-22 08:21:04.171181087 +0000 UTC m=+0.111652195 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-type=git, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.tags=minimal rhel9, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, managed_by=edpm_ansible, maintainer=Red Hat, Inc.)
Nov 22 08:21:06 compute-0 podman[236938]: 2025-11-22 08:21:06.125961641 +0000 UTC m=+0.082088834 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:21:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:21:09.950 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:21:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:21:09.952 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:21:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:21:09.952 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:21:12 compute-0 sshd-session[236961]: Accepted publickey for zuul from 38.129.56.128 port 33906 ssh2: RSA SHA256:g1zSa//+/mxUXmf2M16Bba4a7+RLV+1PmLKCUOr+UqA
Nov 22 08:21:12 compute-0 systemd-logind[826]: New session 29 of user zuul.
Nov 22 08:21:12 compute-0 systemd[1]: Started Session 29 of User zuul.
Nov 22 08:21:12 compute-0 sshd-session[236961]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 08:21:13 compute-0 python3[237138]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:21:15 compute-0 sudo[237359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iffdkrepzugnnvbmikcitvnbnlveqpcq ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763799674.670511-36746-97274895067174/AnsiballZ_command.py'
Nov 22 08:21:15 compute-0 sudo[237359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:21:15 compute-0 python3[237361]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
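This task captures the last 30 minutes of the container's journal by its syslog identifier; run interactively it is just the logged command:

    tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
    journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"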
Nov 22 08:21:15 compute-0 sudo[237359]: pam_unix(sudo:session): session closed for user root
Nov 22 08:21:16 compute-0 sudo[237512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxordvenzyesuxdzhhehwykhxjpixqbu ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763799675.71462-36757-8492064704205/AnsiballZ_command.py'
Nov 22 08:21:16 compute-0 sudo[237512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:21:16 compute-0 nova_compute[189268]: 2025-11-22 08:21:16.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:21:16 compute-0 python3[237514]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "nova_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:21:17 compute-0 sudo[237512]: pam_unix(sudo:session): session closed for user root
Nov 22 08:21:18 compute-0 python3[237665]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 08:21:19 compute-0 sudo[237851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxzsjvouiwjhkrcxlvbxqvzfzqtewkho ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763799678.701791-36801-63320566482759/AnsiballZ_setup.py'
Nov 22 08:21:19 compute-0 sudo[237851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:21:19 compute-0 podman[237792]: 2025-11-22 08:21:19.120863239 +0000 UTC m=+0.074198748 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:21:19 compute-0 podman[237793]: 2025-11-22 08:21:19.124365684 +0000 UTC m=+0.073202281 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:21:19 compute-0 podman[237791]: 2025-11-22 08:21:19.128171578 +0000 UTC m=+0.083951663 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:21:19 compute-0 python3[237873]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 08:21:20 compute-0 nova_compute[189268]: 2025-11-22 08:21:20.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:21:20 compute-0 nova_compute[189268]: 2025-11-22 08:21:20.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:21:20 compute-0 nova_compute[189268]: 2025-11-22 08:21:20.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:21:20 compute-0 nova_compute[189268]: 2025-11-22 08:21:20.113 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:21:20 compute-0 nova_compute[189268]: 2025-11-22 08:21:20.113 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:21:20 compute-0 nova_compute[189268]: 2025-11-22 08:21:20.114 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:21:20 compute-0 sudo[237851]: pam_unix(sudo:session): session closed for user root
Nov 22 08:21:21 compute-0 nova_compute[189268]: 2025-11-22 08:21:21.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:21:21 compute-0 nova_compute[189268]: 2025-11-22 08:21:21.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:21:21 compute-0 nova_compute[189268]: 2025-11-22 08:21:21.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:21:21 compute-0 nova_compute[189268]: 2025-11-22 08:21:21.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:21:21 compute-0 sudo[238097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsvvmltgvgozkdyhuopggizxufajbnoc ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763799680.9392827-36830-86594265203982/AnsiballZ_command.py'
Nov 22 08:21:21 compute-0 sudo[238097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:21:21 compute-0 python3[238099]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
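The check greps the formatted podman ps output for the container's health state ("Up ... (healthy)"). Podman can also exercise the configured healthcheck directly; a sketch:

    podman ps -a --format '{{.Names}} {{.Status}}' | grep ceilometer_agent_compute
    # Exit status 0 means the container's defined healthcheck passed.
    podman healthcheck run ceilometer_agent_compute && echo healthy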
Nov 22 08:21:21 compute-0 sudo[238097]: pam_unix(sudo:session): session closed for user root
Nov 22 08:21:22 compute-0 nova_compute[189268]: 2025-11-22 08:21:22.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:21:22 compute-0 nova_compute[189268]: 2025-11-22 08:21:22.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:21:22 compute-0 nova_compute[189268]: 2025-11-22 08:21:22.126 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:21:22 compute-0 nova_compute[189268]: 2025-11-22 08:21:22.126 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:21:22 compute-0 nova_compute[189268]: 2025-11-22 08:21:22.127 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:21:22 compute-0 nova_compute[189268]: 2025-11-22 08:21:22.127 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:21:22 compute-0 sudo[238261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgqzqaauhycyiuyijxpimhxmkcknetud ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763799681.8951507-36847-87374065114320/AnsiballZ_command.py'
Nov 22 08:21:22 compute-0 sudo[238261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:21:22 compute-0 python3[238263]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:21:22 compute-0 nova_compute[189268]: 2025-11-22 08:21:22.456 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:21:22 compute-0 nova_compute[189268]: 2025-11-22 08:21:22.458 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5697MB free_disk=72.55656433105469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:21:22 compute-0 nova_compute[189268]: 2025-11-22 08:21:22.458 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:21:22 compute-0 nova_compute[189268]: 2025-11-22 08:21:22.458 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:21:22 compute-0 nova_compute[189268]: 2025-11-22 08:21:22.513 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:21:22 compute-0 nova_compute[189268]: 2025-11-22 08:21:22.514 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:21:22 compute-0 sudo[238261]: pam_unix(sudo:session): session closed for user root
Nov 22 08:21:22 compute-0 nova_compute[189268]: 2025-11-22 08:21:22.536 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:21:22 compute-0 nova_compute[189268]: 2025-11-22 08:21:22.550 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:21:22 compute-0 nova_compute[189268]: 2025-11-22 08:21:22.552 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:21:22 compute-0 nova_compute[189268]: 2025-11-22 08:21:22.552 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
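[editor's note] The acquire/release pair around "compute_resources" is oslo.concurrency's lock machinery; the waited/held durations in the two lockutils lines come from its built-in timing. A minimal sketch of the same pattern, assuming oslo.concurrency is installed (names are illustrative, not Nova's actual code):

```python
from oslo_concurrency import lockutils

# Serializes concurrent resource updates the same way the
# "compute_resources" lock in the log above does: the decorator logs
# "Acquiring lock ... / Lock ... acquired / Lock ... released" around
# the wrapped call.
@lockutils.synchronized("compute_resources")
def update_available_resource():
    # ... recompute and persist the resource view ...
    pass

update_available_resource()
```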
Nov 22 08:21:23 compute-0 nova_compute[189268]: 2025-11-22 08:21:23.547 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:21:26 compute-0 podman[238302]: 2025-11-22 08:21:26.118022091 +0000 UTC m=+0.075977786 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:21:26 compute-0 podman[238303]: 2025-11-22 08:21:26.117877767 +0000 UTC m=+0.069923012 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:21:29 compute-0 podman[238343]: 2025-11-22 08:21:29.128169343 +0000 UTC m=+0.084362104 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release-0.7.12=, architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9)
Nov 22 08:21:29 compute-0 podman[238344]: 2025-11-22 08:21:29.203849161 +0000 UTC m=+0.145112646 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 08:21:29 compute-0 podman[203476]: time="2025-11-22T08:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:21:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:21:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4282 "" "Go-http-client/1.1"
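[editor's note] The two GET lines above are the libpod REST API being polled over the Podman socket (the podman_exporter container further down mounts /run/podman/podman.sock for exactly this). A minimal sketch of the same containers/json query from Python, assuming the default root socket path:

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection over an AF_UNIX socket; the host name is a dummy."""

    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
print(resp.status, len(resp.read()), "bytes")  # compare with the 200 28288 above
```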
Nov 22 08:21:31 compute-0 openstack_network_exporter[205661]: ERROR   08:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:21:31 compute-0 openstack_network_exporter[205661]: ERROR   08:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:21:31 compute-0 openstack_network_exporter[205661]: ERROR   08:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:21:31 compute-0 openstack_network_exporter[205661]: ERROR   08:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:21:31 compute-0 openstack_network_exporter[205661]: ERROR   08:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
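[editor's note] These appctl.go errors repeat every polling interval: ovs-appctl-style calls reach a daemon through its control socket (conventionally <name>.<pid>.ctl under the daemon's run directory), and the exporter is not finding any for ovsdb-server or ovn-northd in the paths it has mounted. A hedged existence check, assuming the conventional run directories:

```python
import glob

# ovs-vswitchd and ovsdb-server create <name>.<pid>.ctl sockets under
# /var/run/openvswitch; ovn-northd does the same under /var/run/ovn on
# nodes that actually run it (a compute node typically does not).
for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
    hits = glob.glob(pattern)
    print(pattern, "->", hits or "no control sockets found")
```

On a compute-only node, the ovn-northd errors are expected noise; the "please specify an existing datapath" errors likewise just mean no userspace (dpif-netdev) datapath exists to query.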
Nov 22 08:21:35 compute-0 podman[238387]: 2025-11-22 08:21:35.132046822 +0000 UTC m=+0.081799024 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, release=1755695350, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, version=9.6)
Nov 22 08:21:37 compute-0 podman[238409]: 2025-11-22 08:21:37.133099862 +0000 UTC m=+0.081506167 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:21:50 compute-0 podman[238436]: 2025-11-22 08:21:50.12775837 +0000 UTC m=+0.070681012 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 08:21:50 compute-0 podman[238434]: 2025-11-22 08:21:50.133618531 +0000 UTC m=+0.083953334 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:21:50 compute-0 podman[238435]: 2025-11-22 08:21:50.143489389 +0000 UTC m=+0.091284324 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:21:57 compute-0 podman[238492]: 2025-11-22 08:21:57.107010256 +0000 UTC m=+0.063432875 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 22 08:21:57 compute-0 podman[238491]: 2025-11-22 08:21:57.126984158 +0000 UTC m=+0.085285929 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_id=edpm)
Nov 22 08:21:59 compute-0 podman[203476]: time="2025-11-22T08:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:21:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:21:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4283 "" "Go-http-client/1.1"
Nov 22 08:22:00 compute-0 podman[238528]: 2025-11-22 08:22:00.13303751 +0000 UTC m=+0.087350096 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., container_name=kepler, release=1214.1726694543, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, architecture=x86_64, build-date=2024-09-18T21:23:30, release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 22 08:22:00 compute-0 podman[238529]: 2025-11-22 08:22:00.153766593 +0000 UTC m=+0.103573106 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 22 08:22:01 compute-0 openstack_network_exporter[205661]: ERROR   08:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:22:01 compute-0 openstack_network_exporter[205661]: ERROR   08:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:22:01 compute-0 openstack_network_exporter[205661]: ERROR   08:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:22:01 compute-0 openstack_network_exporter[205661]: ERROR   08:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:22:01 compute-0 openstack_network_exporter[205661]: ERROR   08:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:22:06 compute-0 podman[238573]: 2025-11-22 08:22:06.156753549 +0000 UTC m=+0.107353430 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-type=git, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc.)
Nov 22 08:22:08 compute-0 podman[238593]: 2025-11-22 08:22:08.116724451 +0000 UTC m=+0.070322462 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
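[editor's note] The --collector.systemd.unit-include flag in the node_exporter config above takes a regular expression that node_exporter effectively anchors at both ends, so only whole-name matches are collected. Which units survive the filter can be sanity-checked with Python's re (unit names below are illustrative, not taken from this host):

```python
import re

# The flag value from the log, unescaped: the config stores "\\.service".
unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

units = [
    "edpm_nova_compute.service",  # matches: edpm_.*
    "ovs-vswitchd.service",       # matches: ovs.*
    "virtqemud.service",          # matches: virt.*
    "rsyslog.service",            # matches: rsyslog
    "sshd.service",               # filtered out
]
for unit in units:
    print(unit, "->", bool(unit_include.fullmatch(unit)))
```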
Nov 22 08:22:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:22:09.951 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:22:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:22:09.952 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:22:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:22:09.952 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:22:17 compute-0 nova_compute[189268]: 2025-11-22 08:22:17.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:22:20 compute-0 nova_compute[189268]: 2025-11-22 08:22:20.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:22:20 compute-0 nova_compute[189268]: 2025-11-22 08:22:20.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:22:20 compute-0 nova_compute[189268]: 2025-11-22 08:22:20.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:22:20 compute-0 nova_compute[189268]: 2025-11-22 08:22:20.111 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:22:20 compute-0 nova_compute[189268]: 2025-11-22 08:22:20.112 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:22:21 compute-0 nova_compute[189268]: 2025-11-22 08:22:21.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:22:21 compute-0 nova_compute[189268]: 2025-11-22 08:22:21.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:22:21 compute-0 nova_compute[189268]: 2025-11-22 08:22:21.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
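[editor's note] _reclaim_queued_deletes short-circuits here because reclaim_instance_interval defaults to 0, which disables soft delete entirely. The guard amounts to the sketch below (illustrative, not Nova's exact code):

```python
def reclaim_queued_deletes(conf):
    interval = conf.get("reclaim_instance_interval", 0)
    if interval <= 0:
        # Matches the "CONF.reclaim_instance_interval <= 0, skipping..."
        # line above: soft-deleted instances are never reclaimed.
        print("CONF.reclaim_instance_interval <= 0, skipping...")
        return
    # ... otherwise purge instances soft-deleted more than `interval`
    # seconds ago ...

reclaim_queued_deletes({"reclaim_instance_interval": 0})
```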
Nov 22 08:22:21 compute-0 podman[238620]: 2025-11-22 08:22:21.107906924 +0000 UTC m=+0.064481009 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 22 08:22:21 compute-0 podman[238621]: 2025-11-22 08:22:21.108471038 +0000 UTC m=+0.062366655 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 08:22:21 compute-0 podman[238619]: 2025-11-22 08:22:21.149654055 +0000 UTC m=+0.109776620 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.087 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.088 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.088 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.088 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.088 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.089 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.089 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.089 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.089 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.094 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.094 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.094 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.094 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.094 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.095 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 nova_compute[189268]: 2025-11-22 08:22:22.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.095 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.095 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.095 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.095 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.095 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.095 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.095 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.096 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.096 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.096 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.096 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.096 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.096 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.096 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.096 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.097 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.097 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.097 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.097 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.097 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.097 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.097 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.097 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.097 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.098 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.098 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.098 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 nova_compute[189268]: 2025-11-22 08:22:22.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.099 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.099 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.099 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.099 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.099 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.099 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.099 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.099 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.100 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.100 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.100 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.100 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.100 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.100 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.100 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.100 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:22:22.103 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:22:22 compute-0 sshd-session[236964]: Received disconnect from 38.129.56.128 port 33906:11: disconnected by user
Nov 22 08:22:22 compute-0 sshd-session[236964]: Disconnected from user zuul 38.129.56.128 port 33906
Nov 22 08:22:22 compute-0 sshd-session[236961]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:22:22 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Nov 22 08:22:22 compute-0 systemd[1]: session-29.scope: Consumed 7.702s CPU time.
Nov 22 08:22:22 compute-0 systemd-logind[826]: Session 29 logged out. Waiting for processes to exit.
Nov 22 08:22:22 compute-0 systemd-logind[826]: Removed session 29.
Nov 22 08:22:23 compute-0 nova_compute[189268]: 2025-11-22 08:22:23.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:22:24 compute-0 nova_compute[189268]: 2025-11-22 08:22:24.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:22:24 compute-0 nova_compute[189268]: 2025-11-22 08:22:24.118 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:22:24 compute-0 nova_compute[189268]: 2025-11-22 08:22:24.118 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:22:24 compute-0 nova_compute[189268]: 2025-11-22 08:22:24.118 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:22:24 compute-0 nova_compute[189268]: 2025-11-22 08:22:24.119 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:22:24 compute-0 nova_compute[189268]: 2025-11-22 08:22:24.466 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:22:24 compute-0 nova_compute[189268]: 2025-11-22 08:22:24.467 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5722MB free_disk=72.55670928955078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:22:24 compute-0 nova_compute[189268]: 2025-11-22 08:22:24.467 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:22:24 compute-0 nova_compute[189268]: 2025-11-22 08:22:24.468 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:22:24 compute-0 nova_compute[189268]: 2025-11-22 08:22:24.522 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:22:24 compute-0 nova_compute[189268]: 2025-11-22 08:22:24.523 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:22:24 compute-0 nova_compute[189268]: 2025-11-22 08:22:24.547 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:22:24 compute-0 nova_compute[189268]: 2025-11-22 08:22:24.562 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:22:24 compute-0 nova_compute[189268]: 2025-11-22 08:22:24.564 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:22:24 compute-0 nova_compute[189268]: 2025-11-22 08:22:24.564 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:22:28 compute-0 podman[238677]: 2025-11-22 08:22:28.112584675 +0000 UTC m=+0.070517483 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Nov 22 08:22:28 compute-0 podman[238678]: 2025-11-22 08:22:28.131411323 +0000 UTC m=+0.087018751 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 08:22:29 compute-0 podman[203476]: time="2025-11-22T08:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:22:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:22:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4285 "" "Go-http-client/1.1"
Nov 22 08:22:31 compute-0 podman[238717]: 2025-11-22 08:22:31.158508374 +0000 UTC m=+0.105710416 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, name=ubi9, vcs-type=git, io.openshift.tags=base rhel9, config_id=edpm, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, container_name=kepler, release-0.7.12=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Nov 22 08:22:31 compute-0 podman[238718]: 2025-11-22 08:22:31.205012925 +0000 UTC m=+0.144696807 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 08:22:31 compute-0 openstack_network_exporter[205661]: ERROR   08:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:22:31 compute-0 openstack_network_exporter[205661]: ERROR   08:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:22:31 compute-0 openstack_network_exporter[205661]: ERROR   08:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:22:31 compute-0 openstack_network_exporter[205661]: ERROR   08:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:22:37 compute-0 sshd-session[238759]: Invalid user loginuser from 80.94.92.164 port 50102
Nov 22 08:22:37 compute-0 podman[238761]: 2025-11-22 08:22:37.140572495 +0000 UTC m=+0.098588956 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, version=9.6, distribution-scope=public, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 22 08:22:37 compute-0 sshd-session[238759]: Connection closed by invalid user loginuser 80.94.92.164 port 50102 [preauth]
Nov 22 08:22:39 compute-0 podman[238782]: 2025-11-22 08:22:39.109922202 +0000 UTC m=+0.066905641 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 22 08:22:52 compute-0 podman[238808]: 2025-11-22 08:22:52.112802376 +0000 UTC m=+0.060560278 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:22:52 compute-0 podman[238809]: 2025-11-22 08:22:52.129811688 +0000 UTC m=+0.072056300 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 08:22:52 compute-0 podman[238807]: 2025-11-22 08:22:52.147527428 +0000 UTC m=+0.098404770 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 08:22:59 compute-0 podman[238866]: 2025-11-22 08:22:59.120782242 +0000 UTC m=+0.079647295 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 08:22:59 compute-0 podman[238865]: 2025-11-22 08:22:59.159139677 +0000 UTC m=+0.119358124 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a)
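The health_status events above are podman's periodic healthcheck results: the 'healthcheck' key in each config_data mounts a host directory under /var/lib/openstack/healthchecks into the container and names the probe command, and health_failing_streak counts consecutive failures. A minimal sketch of triggering and reading one check by hand with the podman CLI (assumptions: run on the compute host as root; container name copied from the log):

import json
import subprocess

NAME = "ovn_metadata_agent"  # container name from the log (assumption: still running)

# Run the configured healthcheck once, then read the recorded state back.
subprocess.run(["podman", "healthcheck", "run", NAME], check=False)
out = subprocess.run(
    ["podman", "inspect", "--format", "{{json .State.Health}}", NAME],
    capture_output=True, text=True, check=True,
)
health = json.loads(out.stdout)
print(health["Status"], health["FailingStreak"])  # e.g. "healthy 0", matching health_failing_streak=0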
Nov 22 08:22:59 compute-0 podman[203476]: time="2025-11-22T08:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:22:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:22:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4283 "" "Go-http-client/1.1"
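The two GET requests above are a client polling the libpod REST API on the podman service socket (the podman_exporter container seen later in this log mounts /run/podman/podman.sock for exactly this). A standard-library sketch of the same containers/json call (assumptions: default rootful socket path; 'Connection: close' sidesteps keep-alive handling, and only the status line is parsed):

import socket

SOCK = "/run/podman/podman.sock"  # assumption: default rootful service socket
REQUEST = (
    "GET /v4.9.3/libpod/containers/json?all=true&external=false HTTP/1.1\r\n"
    "Host: localhost\r\n"   # any Host value is accepted over a unix socket
    "Connection: close\r\n\r\n"
)

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(SOCK)
    s.sendall(REQUEST.encode())
    raw = b""
    while chunk := s.recv(65536):
        raw += chunk

print(raw.split(b"\r\n", 1)[0].decode())  # expect "HTTP/1.1 200 OK", as logged above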
Nov 22 08:23:01 compute-0 openstack_network_exporter[205661]: ERROR   08:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:23:01 compute-0 openstack_network_exporter[205661]: ERROR   08:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:23:01 compute-0 openstack_network_exporter[205661]: ERROR   08:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:23:01 compute-0 openstack_network_exporter[205661]: ERROR   08:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:23:01 compute-0 openstack_network_exporter[205661]: ERROR   08:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
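These exporter errors recur every 30 seconds (see 08:23:31 and 08:24:01 below) and appear benign on a compute node: ovn-northd runs on the control plane, so its control socket never exists here, and the ovsdb-server and dpif-netdev PMD probes likewise find nothing to talk to in the mounted rundirs (the PMD queries only apply to a userspace/DPDK datapath). The failing lookup follows the usual ovs-appctl convention of globbing <rundir>/<daemon>.<pid>.ctl; a sketch of that convention (assumption: appctl.go implements the same search; rundir path as mounted into the exporter):

import glob
import os

def find_ctl(daemon, rundir="/run/openvswitch"):
    """Return the first <daemon>.<pid>.ctl control socket found, or None."""
    matches = glob.glob(os.path.join(rundir, daemon + ".*.ctl"))
    return matches[0] if matches else None

for daemon in ("ovsdb-server", "ovn-northd"):
    ctl = find_ctl(daemon)
    print(daemon, "->", ctl or "no control socket files found")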
Nov 22 08:23:02 compute-0 podman[238900]: 2025-11-22 08:23:02.121996516 +0000 UTC m=+0.082449636 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, distribution-scope=public, io.openshift.tags=base rhel9, container_name=kepler, com.redhat.component=ubi9-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 22 08:23:02 compute-0 podman[238901]: 2025-11-22 08:23:02.15052693 +0000 UTC m=+0.106193038 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Nov 22 08:23:08 compute-0 podman[238944]: 2025-11-22 08:23:08.111969126 +0000 UTC m=+0.068066920 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., version=9.6, config_id=edpm, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Nov 22 08:23:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:23:09.953 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:23:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:23:09.953 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:23:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:23:09.953 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
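The acquire/acquired/released triple above is the standard DEBUG trace that oslo.concurrency prints around a named lock; the agent gets it from the lockutils helpers rather than logging it by hand. A minimal runnable sketch of the same pattern (real API: oslo_concurrency.lockutils.synchronized; lock name copied from the log):

import logging

from oslo_concurrency import lockutils

logging.basicConfig(level=logging.DEBUG)  # surfaces oslo's acquire/release lines

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # Runs with the named in-process lock held; oslo logs the
    # "Acquiring lock" / "acquired" / "released" lines seen above.
    pass

check_child_processes()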
Nov 22 08:23:10 compute-0 podman[238965]: 2025-11-22 08:23:10.10757146 +0000 UTC m=+0.063260518 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:23:19 compute-0 nova_compute[189268]: 2025-11-22 08:23:19.565 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:23:20 compute-0 nova_compute[189268]: 2025-11-22 08:23:20.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:23:20 compute-0 nova_compute[189268]: 2025-11-22 08:23:20.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:23:20 compute-0 nova_compute[189268]: 2025-11-22 08:23:20.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:23:20 compute-0 nova_compute[189268]: 2025-11-22 08:23:20.110 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:23:20 compute-0 nova_compute[189268]: 2025-11-22 08:23:20.111 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:23:22 compute-0 nova_compute[189268]: 2025-11-22 08:23:22.101 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:23:22 compute-0 nova_compute[189268]: 2025-11-22 08:23:22.102 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:23:23 compute-0 nova_compute[189268]: 2025-11-22 08:23:23.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:23:23 compute-0 nova_compute[189268]: 2025-11-22 08:23:23.110 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:23:23 compute-0 nova_compute[189268]: 2025-11-22 08:23:23.111 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
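The run_periodic_tasks lines show oslo.service stepping through ComputeManager's registered periodic tasks; a task may return immediately, as _reclaim_queued_deletes does above because reclaim_instance_interval defaults to 0 (soft delete disabled, so nothing is queued to reclaim). A runnable sketch of the registration pattern (assumptions: oslo.service and oslo.config installed; Manager and poll_something are illustrative names, not nova's):

from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class Manager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)
    def poll_something(self, context):
        # Invoked at most once per `spacing` seconds each time the
        # service loop calls run_periodic_tasks().
        print("running periodic task poll_something")

mgr = Manager(CONF)
mgr.run_periodic_tasks(context=None)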
Nov 22 08:23:23 compute-0 podman[238990]: 2025-11-22 08:23:23.113427948 +0000 UTC m=+0.066453394 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 08:23:23 compute-0 podman[238991]: 2025-11-22 08:23:23.128517867 +0000 UTC m=+0.078321215 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 22 08:23:23 compute-0 podman[238989]: 2025-11-22 08:23:23.143902834 +0000 UTC m=+0.100336962 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 08:23:24 compute-0 nova_compute[189268]: 2025-11-22 08:23:24.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:23:24 compute-0 nova_compute[189268]: 2025-11-22 08:23:24.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:23:26 compute-0 nova_compute[189268]: 2025-11-22 08:23:26.101 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:23:26 compute-0 nova_compute[189268]: 2025-11-22 08:23:26.125 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:23:26 compute-0 nova_compute[189268]: 2025-11-22 08:23:26.126 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:23:26 compute-0 nova_compute[189268]: 2025-11-22 08:23:26.126 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:23:26 compute-0 nova_compute[189268]: 2025-11-22 08:23:26.126 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:23:26 compute-0 nova_compute[189268]: 2025-11-22 08:23:26.414 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:23:26 compute-0 nova_compute[189268]: 2025-11-22 08:23:26.415 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5712MB free_disk=72.55672836303711GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:23:26 compute-0 nova_compute[189268]: 2025-11-22 08:23:26.415 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:23:26 compute-0 nova_compute[189268]: 2025-11-22 08:23:26.416 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:23:26 compute-0 nova_compute[189268]: 2025-11-22 08:23:26.485 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:23:26 compute-0 nova_compute[189268]: 2025-11-22 08:23:26.486 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:23:26 compute-0 nova_compute[189268]: 2025-11-22 08:23:26.522 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:23:26 compute-0 nova_compute[189268]: 2025-11-22 08:23:26.535 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:23:26 compute-0 nova_compute[189268]: 2025-11-22 08:23:26.536 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:23:26 compute-0 nova_compute[189268]: 2025-11-22 08:23:26.537 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
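The inventory dict logged at 08:23:26.535 is what the resource tracker reports to placement; the schedulable capacity placement derives from it is, per resource class, roughly (total - reserved) * allocation_ratio. A quick check against the values in that log line:

# Values copied from the "Inventory has not changed" line above.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 79, "reserved": 0, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, "schedulable capacity:", capacity)
# -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 71.1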
Nov 22 08:23:29 compute-0 podman[203476]: time="2025-11-22T08:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:23:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:23:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4286 "" "Go-http-client/1.1"
Nov 22 08:23:30 compute-0 podman[239046]: 2025-11-22 08:23:30.116905105 +0000 UTC m=+0.071910171 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 08:23:30 compute-0 podman[239047]: 2025-11-22 08:23:30.138236073 +0000 UTC m=+0.092020417 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 08:23:31 compute-0 openstack_network_exporter[205661]: ERROR   08:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:23:31 compute-0 openstack_network_exporter[205661]: ERROR   08:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:23:31 compute-0 openstack_network_exporter[205661]: ERROR   08:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:23:31 compute-0 openstack_network_exporter[205661]: ERROR   08:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:23:31 compute-0 openstack_network_exporter[205661]: ERROR   08:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:23:33 compute-0 podman[239083]: 2025-11-22 08:23:33.108047087 +0000 UTC m=+0.067891592 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, name=ubi9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 22 08:23:33 compute-0 podman[239084]: 2025-11-22 08:23:33.175961459 +0000 UTC m=+0.132803472 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:23:33 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:23:33.702 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:cf:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'd6:f7:8f:a1:cd:35'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:23:33 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:23:33.704 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 08:23:33 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:23:33.704 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
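Here the metadata agent acknowledges the SB_Global nb_cfg bump (to 2, per the UPDATE event above) by writing neutron:ovn-metadata-sb-cfg into its own Chassis_Private external_ids through an ovsdbapp transaction. The idiom, sketched (real library: ovsdbapp; connection setup elided, so `api` stands for an already-connected OVN southbound backend and `chassis_uuid` for the record in the log):

def ack_nb_cfg(api, chassis_uuid, nb_cfg):
    # `api` is assumed to be a connected ovsdbapp OVN-SB backend.
    with api.transaction(check_error=True) as txn:
        # Equivalent of the DbSetCommand in the log: merge one key into
        # the record's external_ids map column.
        txn.add(api.db_set(
            "Chassis_Private", chassis_uuid,
            ("external_ids", {"neutron:ovn-metadata-sb-cfg": str(nb_cfg)}),
        ))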
Nov 22 08:23:39 compute-0 podman[239128]: 2025-11-22 08:23:39.113711976 +0000 UTC m=+0.071895021 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1755695350, build-date=2025-08-20T13:12:41, version=9.6, distribution-scope=public, name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.buildah.version=1.33.7, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 22 08:23:41 compute-0 podman[239149]: 2025-11-22 08:23:41.095899597 +0000 UTC m=+0.054076347 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 22 08:23:54 compute-0 podman[239174]: 2025-11-22 08:23:54.106090392 +0000 UTC m=+0.065469256 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 22 08:23:54 compute-0 podman[239175]: 2025-11-22 08:23:54.107794969 +0000 UTC m=+0.063311699 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 22 08:23:54 compute-0 podman[239176]: 2025-11-22 08:23:54.133582138 +0000 UTC m=+0.083934197 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 08:23:59 compute-0 podman[203476]: time="2025-11-22T08:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:23:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:23:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4280 "" "Go-http-client/1.1"
Nov 22 08:24:01 compute-0 podman[239238]: 2025-11-22 08:24:01.109469366 +0000 UTC m=+0.059846295 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 08:24:01 compute-0 podman[239237]: 2025-11-22 08:24:01.140215569 +0000 UTC m=+0.094597255 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS)
Nov 22 08:24:01 compute-0 openstack_network_exporter[205661]: ERROR   08:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:24:01 compute-0 openstack_network_exporter[205661]: ERROR   08:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:24:01 compute-0 openstack_network_exporter[205661]: ERROR   08:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:24:01 compute-0 openstack_network_exporter[205661]: ERROR   08:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:24:01 compute-0 openstack_network_exporter[205661]: ERROR   08:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
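
These exporter ERRORs are expected noise on a compute node: openstack_network_exporter locates daemons through their OVS-style control sockets (<rundir>/<daemon>.<pid>.ctl), and here ovn-northd does not run locally (it lives with the control plane), while the dpif-netdev/pmd-* calls apply only to a userspace datapath, which this host apparently does not have. A small sketch of the socket lookup those appctl calls depend on (rundir paths follow the usual OVS/OVN defaults and the exporter's volume mounts shown below; treat them as assumptions):

    import glob
    import os

    RUNDIRS = {
        "ovsdb-server": "/run/openvswitch",
        "ovs-vswitchd": "/run/openvswitch",
        "ovn-northd": "/run/ovn",  # normally absent on compute nodes
    }

    def control_socket(daemon):
        """Find <rundir>/<daemon>.<pid>.ctl the way ovs-appctl resolves
        its target; None means no control socket for that daemon here."""
        pattern = os.path.join(RUNDIRS[daemon], daemon + ".*.ctl")
        candidates = glob.glob(pattern)
        return max(candidates, key=os.path.getmtime) if candidates else None

    for daemon in RUNDIRS:
        print(daemon, "->", control_socket(daemon) or "no control socket found")
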
Nov 22 08:24:04 compute-0 podman[239277]: 2025-11-22 08:24:04.123003695 +0000 UTC m=+0.077916473 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=kepler, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, release-0.7.12=, architecture=x86_64, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Nov 22 08:24:04 compute-0 podman[239278]: 2025-11-22 08:24:04.192352466 +0000 UTC m=+0.144192041 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller)
Nov 22 08:24:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:09.955 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:24:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:09.955 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:24:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:09.955 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
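
This acquire/acquired/released DEBUG triple is the standard oslo.concurrency trace around a critical section; the "waited" and "held" durations bracket the lock. A minimal reproduction of the pattern the agent uses (the function body is a placeholder, not neutron's real monitor logic):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        pass  # inspect child processes while holding the lock

    # Emits the Acquiring/acquired/released records when DEBUG logging
    # is enabled, just as in the three lines above.
    _check_child_processes()
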
Nov 22 08:24:10 compute-0 podman[239322]: 2025-11-22 08:24:10.113072083 +0000 UTC m=+0.065305743 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vcs-type=git, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm)
Nov 22 08:24:12 compute-0 podman[239343]: 2025-11-22 08:24:12.099373485 +0000 UTC m=+0.053673206 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
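
node_exporter above publishes on host port 9100 and restricts the systemd collector to units matching (edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service. A quick way to see what that filter actually exposes, sketched under the assumption that the endpoint answers plain HTTP (the --web.config.file option can enforce TLS, in which case https and the mounted certificates are needed):

    import re
    import urllib.request

    UNIT_RE = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    with urllib.request.urlopen("http://localhost:9100/metrics") as resp:
        for line in resp.read().decode().splitlines():
            # node_systemd_unit_state is emitted by --collector.systemd
            if line.startswith("node_systemd_unit_state") and UNIT_RE.search(line):
                print(line)
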
Nov 22 08:24:19 compute-0 nova_compute[189268]: 2025-11-22 08:24:19.535 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:24:20 compute-0 nova_compute[189268]: 2025-11-22 08:24:20.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:24:20 compute-0 nova_compute[189268]: 2025-11-22 08:24:20.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:24:20 compute-0 nova_compute[189268]: 2025-11-22 08:24:20.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:24:20 compute-0 nova_compute[189268]: 2025-11-22 08:24:20.112 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:24:21 compute-0 nova_compute[189268]: 2025-11-22 08:24:21.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
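
The nova_compute entries here all come from oslo.service's periodic-task machinery: methods decorated as periodic tasks are collected on the manager class and dispatched by run_periodic_tasks, which logs the "Running periodic task ..." line for each. A minimal sketch of that pattern (the 60-second spacing is arbitrary, and the task body stands in for nova's cache-healing logic):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _heal_instance_info_cache(self, context):
            pass  # nova refreshes instances' network info caches here

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)
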
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.088 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.088 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.089 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.089 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.089 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.091 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.092 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.093 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.093 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.093 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.093 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.093 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.093 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.093 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.094 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.094 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.094 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.094 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.094 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.094 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.094 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.094 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 nova_compute[189268]: 2025-11-22 08:24:22.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.095 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.095 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.095 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.095 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.095 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.095 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.095 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.095 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.095 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.096 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.096 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.097 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.097 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.097 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.097 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.097 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.098 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.098 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.098 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 nova_compute[189268]: 2025-11-22 08:24:22.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.098 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.098 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.099 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.099 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.099 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.099 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.099 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.099 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.099 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.099 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.100 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.100 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.100 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.100 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.100 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.100 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.100 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.101 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:24:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:24:22.102 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
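
The polling cycle above follows one shape throughout: pollsters are registered onto a ThreadPoolExecutor (a single worker here, hence the earlier warning that there are more pollsters than threads), each runs its local_instances discovery, and a pollster is skipped whenever discovery returns no resources, as it does on a host with no VMs. A compact sketch of that flow (names and meter list are illustrative, not ceilometer's real classes):

    from concurrent.futures import ThreadPoolExecutor

    POLLSTERS = ["network.incoming.bytes", "cpu", "memory.usage"]

    def discover_local_instances():
        return []  # no instances on this host this cycle, as in the log

    def poll(name):
        resources = discover_local_instances()
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return
        print(f"polling {name} for {len(resources)} resources")

    with ThreadPoolExecutor(max_workers=1) as executor:  # fewer threads than pollsters
        for name in POLLSTERS:
            executor.submit(poll, name)
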
Nov 22 08:24:23 compute-0 nova_compute[189268]: 2025-11-22 08:24:23.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:24:23 compute-0 nova_compute[189268]: 2025-11-22 08:24:23.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:24:24 compute-0 nova_compute[189268]: 2025-11-22 08:24:24.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:24:25 compute-0 nova_compute[189268]: 2025-11-22 08:24:25.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:24:25 compute-0 podman[239370]: 2025-11-22 08:24:25.118820843 +0000 UTC m=+0.064503684 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:24:25 compute-0 podman[239369]: 2025-11-22 08:24:25.120231371 +0000 UTC m=+0.070695022 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 08:24:25 compute-0 podman[239371]: 2025-11-22 08:24:25.153154541 +0000 UTC m=+0.092343706 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 08:24:26 compute-0 nova_compute[189268]: 2025-11-22 08:24:26.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:24:26 compute-0 nova_compute[189268]: 2025-11-22 08:24:26.123 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:24:26 compute-0 nova_compute[189268]: 2025-11-22 08:24:26.124 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:24:26 compute-0 nova_compute[189268]: 2025-11-22 08:24:26.124 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
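The three lockutils lines above (acquiring, acquired after a measured wait, released with the held time) are oslo.concurrency's standard pattern for serializing resource-tracker work under the named "compute_resources" lock. A minimal sketch of the same pattern, assuming the oslo.concurrency package is available and with a placeholder body:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Runs under the named in-process lock; concurrent callers block,
        # and the waited/held durations are what the DEBUG lines report.
        pass

    clean_compute_node_cache()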
Nov 22 08:24:26 compute-0 nova_compute[189268]: 2025-11-22 08:24:26.124 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:24:26 compute-0 nova_compute[189268]: 2025-11-22 08:24:26.466 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:24:26 compute-0 nova_compute[189268]: 2025-11-22 08:24:26.467 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5712MB free_disk=72.55672836303711GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:24:26 compute-0 nova_compute[189268]: 2025-11-22 08:24:26.467 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:24:26 compute-0 nova_compute[189268]: 2025-11-22 08:24:26.468 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:24:26 compute-0 nova_compute[189268]: 2025-11-22 08:24:26.551 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:24:26 compute-0 nova_compute[189268]: 2025-11-22 08:24:26.551 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:24:26 compute-0 nova_compute[189268]: 2025-11-22 08:24:26.575 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:24:26 compute-0 nova_compute[189268]: 2025-11-22 08:24:26.586 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:24:26 compute-0 nova_compute[189268]: 2025-11-22 08:24:26.588 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:24:26 compute-0 nova_compute[189268]: 2025-11-22 08:24:26.588 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
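The inventory reported to placement above fixes this node's schedulable capacity: placement treats capacity as (total - reserved) * allocation_ratio per resource class. Worked out with the logged values (illustrative arithmetic only):

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 0, "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")
    # VCPU: 32   MEMORY_MB: 7167   DISK_GB: 71.1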
Nov 22 08:24:29 compute-0 podman[203476]: time="2025-11-22T08:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:24:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:24:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4290 "" "Go-http-client/1.1"
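The two podman[203476] lines are the Podman API service's HTTP access log: prometheus-podman-exporter (configured earlier with CONTAINER_HOST=unix:///run/podman/podman.sock) scrapes the libpod REST API over the unix socket. A self-contained sketch of issuing the same GET with only the standard library; the UnixHTTPConnection helper is ours, while the socket path and API route are taken from the log:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))  # the access log shows 200 / 28288 bytes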
Nov 22 08:24:31 compute-0 openstack_network_exporter[205661]: ERROR   08:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:24:31 compute-0 openstack_network_exporter[205661]: ERROR   08:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:24:31 compute-0 openstack_network_exporter[205661]: ERROR   08:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:24:31 compute-0 openstack_network_exporter[205661]: ERROR   08:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:24:31 compute-0 openstack_network_exporter[205661]: ERROR   08:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
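These exporter errors are expected on a compute node: ovn-northd does not run here, and with the kernel OVS datapath there is no dpif-netdev (userspace) datapath for the pmd-rxq/pmd-perf collectors, so the exporter finds no matching appctl control sockets. A quick check of which control sockets actually exist; the runtime directories below are the conventional ones and may differ per deployment (this host maps /var/run/openvswitch and /var/lib/openvswitch/ovn into the exporter container):

    import glob

    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "no control sockets")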
Nov 22 08:24:32 compute-0 podman[239429]: 2025-11-22 08:24:32.110628898 +0000 UTC m=+0.067983699 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:24:32 compute-0 podman[239428]: 2025-11-22 08:24:32.113805923 +0000 UTC m=+0.074024552 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 08:24:34 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:34.008 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:cf:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'd6:f7:8f:a1:cd:35'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:24:34 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:34.009 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 08:24:35 compute-0 podman[239467]: 2025-11-22 08:24:35.128152715 +0000 UTC m=+0.081907935 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, config_id=edpm, release=1214.1726694543, container_name=kepler)
Nov 22 08:24:35 compute-0 podman[239468]: 2025-11-22 08:24:35.172181695 +0000 UTC m=+0.125860773 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 08:24:41 compute-0 podman[239510]: 2025-11-22 08:24:41.105650085 +0000 UTC m=+0.066938711 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, distribution-scope=public, vcs-type=git, container_name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 22 08:24:43 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:43.012 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
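Taken together with the two ovn_metadata_agent lines at 08:24:34, this transaction closes the loop on the SB_Global update: the agent saw nb_cfg go 2 -> 3, waited the announced 9 seconds, then recorded neutron:ovn-metadata-sb-cfg = '3' in its Chassis_Private external_ids. The delay is a splay so that many agents do not write to the southbound DB at the same instant; a schematic sketch (ack_nb_cfg is a hypothetical helper, with an inline sleep instead of the agent's event loop):

    import random
    import time

    def ack_nb_cfg(nb_cfg, update_chassis):
        delay = random.randint(0, 10)  # the log happened to pick 9
        print(f"Delaying updating chassis table for {delay} seconds")
        time.sleep(delay)
        update_chassis({"neutron:ovn-metadata-sb-cfg": str(nb_cfg)})

    ack_nb_cfg(3, lambda ids: print("external_ids <-", ids))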
Nov 22 08:24:43 compute-0 podman[239532]: 2025-11-22 08:24:43.098274802 +0000 UTC m=+0.056377705 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:24:45 compute-0 nova_compute[189268]: 2025-11-22 08:24:45.396 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:24:45 compute-0 nova_compute[189268]: 2025-11-22 08:24:45.396 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:24:45 compute-0 nova_compute[189268]: 2025-11-22 08:24:45.487 189273 DEBUG nova.compute.manager [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 08:24:45 compute-0 nova_compute[189268]: 2025-11-22 08:24:45.785 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:24:45 compute-0 nova_compute[189268]: 2025-11-22 08:24:45.786 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:24:45 compute-0 nova_compute[189268]: 2025-11-22 08:24:45.795 189273 DEBUG nova.virt.hardware [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 08:24:45 compute-0 nova_compute[189268]: 2025-11-22 08:24:45.796 189273 INFO nova.compute.claims [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Claim successful on node compute-0.ctlplane.example.com
Nov 22 08:24:45 compute-0 nova_compute[189268]: 2025-11-22 08:24:45.895 189273 DEBUG nova.compute.provider_tree [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:24:45 compute-0 nova_compute[189268]: 2025-11-22 08:24:45.907 189273 DEBUG nova.scheduler.client.report [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:24:45 compute-0 nova_compute[189268]: 2025-11-22 08:24:45.932 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:24:45 compute-0 nova_compute[189268]: 2025-11-22 08:24:45.933 189273 DEBUG nova.compute.manager [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 08:24:45 compute-0 nova_compute[189268]: 2025-11-22 08:24:45.973 189273 DEBUG nova.compute.manager [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 08:24:45 compute-0 nova_compute[189268]: 2025-11-22 08:24:45.974 189273 DEBUG nova.network.neutron [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 08:24:46 compute-0 nova_compute[189268]: 2025-11-22 08:24:46.014 189273 INFO nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 08:24:46 compute-0 nova_compute[189268]: 2025-11-22 08:24:46.103 189273 DEBUG nova.compute.manager [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 08:24:46 compute-0 nova_compute[189268]: 2025-11-22 08:24:46.336 189273 DEBUG nova.compute.manager [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 08:24:46 compute-0 nova_compute[189268]: 2025-11-22 08:24:46.337 189273 DEBUG nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 08:24:46 compute-0 nova_compute[189268]: 2025-11-22 08:24:46.338 189273 INFO nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Creating image(s)
Nov 22 08:24:46 compute-0 nova_compute[189268]: 2025-11-22 08:24:46.338 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "/var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:24:46 compute-0 nova_compute[189268]: 2025-11-22 08:24:46.338 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:24:46 compute-0 nova_compute[189268]: 2025-11-22 08:24:46.339 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:24:46 compute-0 nova_compute[189268]: 2025-11-22 08:24:46.339 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "3743d624bf4f49380cb6de0480bbb028361f5cb4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:24:46 compute-0 nova_compute[189268]: 2025-11-22 08:24:46.340 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "3743d624bf4f49380cb6de0480bbb028361f5cb4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:24:46 compute-0 nova_compute[189268]: 2025-11-22 08:24:46.639 189273 WARNING oslo_policy.policy [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 22 08:24:46 compute-0 nova_compute[189268]: 2025-11-22 08:24:46.639 189273 WARNING oslo_policy.policy [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 22 08:24:47 compute-0 nova_compute[189268]: 2025-11-22 08:24:47.882 189273 DEBUG nova.network.neutron [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Successfully created port: 4645bc8c-a850-4f1b-9ebc-89d2ba862ffe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 08:24:47 compute-0 nova_compute[189268]: 2025-11-22 08:24:47.988 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:24:48 compute-0 nova_compute[189268]: 2025-11-22 08:24:48.051 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4.part --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:24:48 compute-0 nova_compute[189268]: 2025-11-22 08:24:48.052 189273 DEBUG nova.virt.images [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] de9f57cf-28b4-4cbd-b943-19aa098356bf was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 22 08:24:48 compute-0 nova_compute[189268]: 2025-11-22 08:24:48.233 189273 DEBUG nova.privsep.utils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 22 08:24:48 compute-0 nova_compute[189268]: 2025-11-22 08:24:48.234 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4.part /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:24:49 compute-0 nova_compute[189268]: 2025-11-22 08:24:49.161 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4.part /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4.converted" returned: 0 in 0.927s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:24:49 compute-0 nova_compute[189268]: 2025-11-22 08:24:49.165 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:24:49 compute-0 nova_compute[189268]: 2025-11-22 08:24:49.228 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4.converted --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:24:49 compute-0 nova_compute[189268]: 2025-11-22 08:24:49.229 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "3743d624bf4f49380cb6de0480bbb028361f5cb4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
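The block from 08:24:47.988 to 08:24:49.229 is nova's image-cache fetch running under the image-hash lock: qemu-img info reports the downloaded .part file as qcow2, fetch_to_raw converts it to raw (with -t none to bypass the host page cache), the result is verified, and the lock is released after 2.889s. A condensed sketch of that inspect-then-convert step, assuming qemu-img is on PATH:

    import json
    import subprocess

    def fetch_to_raw(path):
        # Inspect the image; qemu-img info --output=json includes a "format" key.
        info = json.loads(subprocess.check_output(
            ["qemu-img", "info", "--force-share", "--output=json", path]))
        if info["format"] == "qcow2":
            converted = path + ".converted"
            subprocess.check_call(
                ["qemu-img", "convert", "-t", "none", "-O", "raw", "-f", "qcow2",
                 path, converted])
            return converted
        return path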
Nov 22 08:24:49 compute-0 nova_compute[189268]: 2025-11-22 08:24:49.242 189273 INFO oslo.privsep.daemon [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpjx2fvkhs/privsep.sock']
Nov 22 08:24:49 compute-0 nova_compute[189268]: 2025-11-22 08:24:49.461 189273 DEBUG nova.network.neutron [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Successfully updated port: 4645bc8c-a850-4f1b-9ebc-89d2ba862ffe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 08:24:49 compute-0 nova_compute[189268]: 2025-11-22 08:24:49.478 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:24:49 compute-0 nova_compute[189268]: 2025-11-22 08:24:49.479 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquired lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:24:49 compute-0 nova_compute[189268]: 2025-11-22 08:24:49.479 189273 DEBUG nova.network.neutron [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 08:24:49 compute-0 nova_compute[189268]: 2025-11-22 08:24:49.662 189273 DEBUG nova.network.neutron [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 08:24:49 compute-0 nova_compute[189268]: 2025-11-22 08:24:49.964 189273 INFO oslo.privsep.daemon [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Spawned new privsep daemon via rootwrap
Nov 22 08:24:49 compute-0 nova_compute[189268]: 2025-11-22 08:24:49.967 189273 DEBUG nova.compute.manager [req-e2033137-3f89-411e-b108-db9d87a346f0 req-e9cba23b-dd3b-4bbe-bbf9-f69d8f6ad08f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Received event network-changed-4645bc8c-a850-4f1b-9ebc-89d2ba862ffe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:24:49 compute-0 nova_compute[189268]: 2025-11-22 08:24:49.968 189273 DEBUG nova.compute.manager [req-e2033137-3f89-411e-b108-db9d87a346f0 req-e9cba23b-dd3b-4bbe-bbf9-f69d8f6ad08f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Refreshing instance network info cache due to event network-changed-4645bc8c-a850-4f1b-9ebc-89d2ba862ffe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:24:49 compute-0 nova_compute[189268]: 2025-11-22 08:24:49.968 189273 DEBUG oslo_concurrency.lockutils [req-e2033137-3f89-411e-b108-db9d87a346f0 req-e9cba23b-dd3b-4bbe-bbf9-f69d8f6ad08f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:24:49 compute-0 nova_compute[189268]: 2025-11-22 08:24:49.827 239575 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 22 08:24:49 compute-0 nova_compute[189268]: 2025-11-22 08:24:49.833 239575 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 22 08:24:49 compute-0 nova_compute[189268]: 2025-11-22 08:24:49.835 239575 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 22 08:24:49 compute-0 nova_compute[189268]: 2025-11-22 08:24:49.836 239575 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239575
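The four privsep lines (note the interleaved timestamps: pid 239575 is the child helper, whose messages reached the journal slightly later) show the pattern: nova spawns the helper via sudo nova-rootwrap, and the daemon then confines itself to uid/gid 0 with an effective/permitted capability set limited to the six capabilities listed and an empty inheritable set. Those sets are readable from /proc; a small sketch decoding just the bits the log names, using bit numbers from linux/capability.h (decode_capeff is a hypothetical helper):

    CAPS = {0: "CAP_CHOWN", 1: "CAP_DAC_OVERRIDE", 2: "CAP_DAC_READ_SEARCH",
            3: "CAP_FOWNER", 12: "CAP_NET_ADMIN", 21: "CAP_SYS_ADMIN"}

    def decode_capeff(pid="self"):
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("CapEff:"):
                    mask = int(line.split()[1], 16)
                    return [name for bit, name in CAPS.items() if mask >> bit & 1]

    print(decode_capeff())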
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.062 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.128 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.129 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "3743d624bf4f49380cb6de0480bbb028361f5cb4" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.130 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "3743d624bf4f49380cb6de0480bbb028361f5cb4" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.143 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.203 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.205 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4,backing_fmt=raw /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.249 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4,backing_fmt=raw /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.252 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "3743d624bf4f49380cb6de0480bbb028361f5cb4" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
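With the raw base image cached, the instance disk itself is created as a qcow2 copy-on-write overlay: the qemu-img create command above points backing_file at the shared base (backing_fmt=raw) and sizes the overlay at 1073741824 bytes (1 GiB), so writes land in the per-instance file while unmodified blocks are read from the base. The can_resize_image check that follows then refuses to shrink below the image's virtual size. A minimal sketch of the overlay step (create_cow_overlay is a hypothetical helper; paths in the usage comment are shortened):

    import subprocess

    def create_cow_overlay(base, overlay, size_bytes):
        subprocess.check_call(
            ["qemu-img", "create", "-f", "qcow2",
             "-o", f"backing_file={base},backing_fmt=raw",
             overlay, str(size_bytes)])

    # create_cow_overlay("/var/lib/nova/instances/_base/<image hash>",
    #                    "/var/lib/nova/instances/<instance uuid>/disk", 1073741824)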
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.253 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.312 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.314 189273 DEBUG nova.virt.disk.api [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Checking if we can resize image /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.314 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.374 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.375 189273 DEBUG nova.virt.disk.api [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Cannot resize image /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
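
The "Cannot resize image ... to a smaller size" line is the expected outcome of can_resize_image, not a failure: Nova compares the flavor's requested root size (1073741824 bytes here) against the virtual size qemu-img reports and only ever grows a disk. A sketch of that check, assuming the "virtual-size" field that qemu-img's JSON output provides:

    import json
    import subprocess

    def can_resize_image(path: str, new_size: int) -> bool:
        # qemu-img reports the current virtual size of the disk in bytes.
        out = subprocess.check_output(
            ["qemu-img", "info", path, "--force-share", "--output=json"])
        virtual_size = json.loads(out)["virtual-size"]
        # Growing is allowed; an equal or smaller target is skipped.
        return new_size > virtual_size
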
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.375 189273 DEBUG nova.objects.instance [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lazy-loading 'migration_context' on Instance uuid 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.389 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "/var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.389 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.390 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.390 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.391 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.391 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.414 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.414 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.452 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.453 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
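
Ephemeral disks go through the same cache-and-overlay scheme as Glance images: a raw template is created in _base, formatted once with mkfs (vfat, label ephemeral0), and each instance then gets a qcow2 overlay on top of it (disk.eph0 a few lines below). The cache key ephemeral_1_0706d66 encodes the size in GiB plus a 7-character suffix that looks like a truncated hash of the filesystem parameters; assuming that derivation, a hypothetical sketch:

    import hashlib

    def ephemeral_cache_key(size_gb: int, mkfs_spec: str) -> str:
        # Hypothetical reconstruction: hash the mkfs parameters so templates
        # with different filesystems or labels get distinct cache entries.
        digest = hashlib.sha256(mkfs_spec.encode()).hexdigest()[:7]
        return f"ephemeral_{size_gb}_{digest}"

    print(ephemeral_cache_key(1, "mkfs -t vfat -n ephemeral0"))
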
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.470 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.537 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.538 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.539 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.555 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.612 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.613 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.671 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 1073741824" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.672 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.673 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.731 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.732 189273 DEBUG nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.733 189273 DEBUG nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Ensure instance console log exists: /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.733 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.734 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.734 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.797 189273 DEBUG nova.network.neutron [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updating instance_info_cache with network_info: [{"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "address": "fa:16:3e:4f:4a:5d", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.53", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4645bc8c-a8", "ovs_interfaceid": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.833 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Releasing lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.834 189273 DEBUG nova.compute.manager [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Instance network_info: |[{"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "address": "fa:16:3e:4f:4a:5d", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.53", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4645bc8c-a8", "ovs_interfaceid": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
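
The network_info blob cached above is the serialized VIF list the virt driver consumes for the rest of the spawn; the bridge, MTU, tap device name and OVN-bound port id are all read from this cache rather than fetched from Neutron again. A small helper showing which fields matter in one VIF entry (plain dict access over the structure logged above):

    def vif_addressing(vif: dict) -> dict:
        # Extract the fields the libvirt driver consumes from one VIF entry.
        subnet = vif["network"]["subnets"][0]
        return {
            "mac": vif["address"],                    # fa:16:3e:4f:4a:5d
            "fixed_ip": subnet["ips"][0]["address"],  # 192.168.0.53
            "gateway": subnet["gateway"]["address"],  # 192.168.0.1
            "mtu": vif["network"]["meta"]["mtu"],     # 1442
            "tap": vif["devname"],                    # tap4645bc8c-a8
        }
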
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.835 189273 DEBUG oslo_concurrency.lockutils [req-e2033137-3f89-411e-b108-db9d87a346f0 req-e9cba23b-dd3b-4bbe-bbf9-f69d8f6ad08f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.835 189273 DEBUG nova.network.neutron [req-e2033137-3f89-411e-b108-db9d87a346f0 req-e9cba23b-dd3b-4bbe-bbf9-f69d8f6ad08f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Refreshing network info cache for port 4645bc8c-a850-4f1b-9ebc-89d2ba862ffe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.838 189273 DEBUG nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Start _get_guest_xml network_info=[{"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "address": "fa:16:3e:4f:4a:5d", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.53", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4645bc8c-a8", "ovs_interfaceid": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-22T08:23:24Z,direct_url=<?>,disk_format='qcow2',id=de9f57cf-28b4-4cbd-b943-19aa098356bf,min_disk=0,min_ram=0,name='cirros',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-22T08:23:25Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}], 'ephemerals': [{'device_name': '/dev/vdb', 'device_type': 'disk', 'size': 1, 'encryption_options': None, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.848 189273 WARNING nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.859 189273 DEBUG nova.virt.libvirt.host [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.860 189273 DEBUG nova.virt.libvirt.host [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.864 189273 DEBUG nova.virt.libvirt.host [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.864 189273 DEBUG nova.virt.libvirt.host [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
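
The two probes above show a cgroup-v2-only host: the v1 CPU controller is absent and the v2 one is found, which determines whether CPU shares and quota can be applied to the guest. One plausible way to make the same v2 check by hand (Nova probes through its host abstraction, not necessarily this exact file read):

    from pathlib import Path

    def has_cgroupv2_cpu_controller() -> bool:
        # On a cgroup-v2 host the root controllers file lists the available
        # controllers, e.g. "cpuset cpu io memory pids".
        path = Path("/sys/fs/cgroup/cgroup.controllers")
        return path.exists() and "cpu" in path.read_text().split()
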
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.865 189273 DEBUG nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.865 189273 DEBUG nova.virt.hardware [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T08:23:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='796e25a8-f28d-499e-b2fb-dfae32f0eed7',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-22T08:23:24Z,direct_url=<?>,disk_format='qcow2',id=de9f57cf-28b4-4cbd-b943-19aa098356bf,min_disk=0,min_ram=0,name='cirros',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-22T08:23:25Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.865 189273 DEBUG nova.virt.hardware [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.866 189273 DEBUG nova.virt.hardware [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.866 189273 DEBUG nova.virt.hardware [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.866 189273 DEBUG nova.virt.hardware [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.866 189273 DEBUG nova.virt.hardware [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.866 189273 DEBUG nova.virt.hardware [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.867 189273 DEBUG nova.virt.hardware [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.867 189273 DEBUG nova.virt.hardware [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.867 189273 DEBUG nova.virt.hardware [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.867 189273 DEBUG nova.virt.hardware [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
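
With no hw:cpu_* extra specs on the m1.small flavor and no topology properties on the image, every limit and preference resolves to 0:0:0, so the only real constraint is the vCPU count and the search collapses to sockets=1, cores=1, threads=1, which is exactly the <topology> element emitted in the XML below. A simplified enumeration in the same spirit (not Nova's exact algorithm):

    def possible_topologies(vcpus: int):
        # Yield (sockets, cores, threads) triples whose product equals vcpus.
        for sockets in range(1, vcpus + 1):
            for cores in range(1, vcpus + 1):
                for threads in range(1, vcpus + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
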
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.871 189273 DEBUG nova.privsep.utils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.872 189273 DEBUG nova.virt.libvirt.vif [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:24:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='80e46844b3824928a6138235e5ede512',ramdisk_id='',reservation_id='r-mmjvr90v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:24:46Z,user_data=None,user_id='27ed1dd009ad4e29863ab5e3a9826c94',uuid=78b5db02-f49a-4c0b-b4f6-8d3b3d689e66,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "address": "fa:16:3e:4f:4a:5d", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.53", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4645bc8c-a8", "ovs_interfaceid": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.873 189273 DEBUG nova.network.os_vif_util [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converting VIF {"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "address": "fa:16:3e:4f:4a:5d", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.53", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4645bc8c-a8", "ovs_interfaceid": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.874 189273 DEBUG nova.network.os_vif_util [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:4a:5d,bridge_name='br-int',has_traffic_filtering=True,id=4645bc8c-a850-4f1b-9ebc-89d2ba862ffe,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4645bc8c-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.875 189273 DEBUG nova.objects.instance [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lazy-loading 'pci_devices' on Instance uuid 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.890 189273 DEBUG nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] End _get_guest_xml xml=<domain type="kvm">
Nov 22 08:24:50 compute-0 nova_compute[189268]:   <uuid>78b5db02-f49a-4c0b-b4f6-8d3b3d689e66</uuid>
Nov 22 08:24:50 compute-0 nova_compute[189268]:   <name>instance-00000001</name>
Nov 22 08:24:50 compute-0 nova_compute[189268]:   <memory>524288</memory>
Nov 22 08:24:50 compute-0 nova_compute[189268]:   <vcpu>1</vcpu>
Nov 22 08:24:50 compute-0 nova_compute[189268]:   <metadata>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <nova:name>test_0</nova:name>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <nova:creationTime>2025-11-22 08:24:50</nova:creationTime>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <nova:flavor name="m1.small">
Nov 22 08:24:50 compute-0 nova_compute[189268]:         <nova:memory>512</nova:memory>
Nov 22 08:24:50 compute-0 nova_compute[189268]:         <nova:disk>1</nova:disk>
Nov 22 08:24:50 compute-0 nova_compute[189268]:         <nova:swap>0</nova:swap>
Nov 22 08:24:50 compute-0 nova_compute[189268]:         <nova:ephemeral>1</nova:ephemeral>
Nov 22 08:24:50 compute-0 nova_compute[189268]:         <nova:vcpus>1</nova:vcpus>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       </nova:flavor>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <nova:owner>
Nov 22 08:24:50 compute-0 nova_compute[189268]:         <nova:user uuid="27ed1dd009ad4e29863ab5e3a9826c94">admin</nova:user>
Nov 22 08:24:50 compute-0 nova_compute[189268]:         <nova:project uuid="80e46844b3824928a6138235e5ede512">admin</nova:project>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       </nova:owner>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <nova:root type="image" uuid="de9f57cf-28b4-4cbd-b943-19aa098356bf"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <nova:ports>
Nov 22 08:24:50 compute-0 nova_compute[189268]:         <nova:port uuid="4645bc8c-a850-4f1b-9ebc-89d2ba862ffe">
Nov 22 08:24:50 compute-0 nova_compute[189268]:           <nova:ip type="fixed" address="192.168.0.53" ipVersion="4"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:         </nova:port>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       </nova:ports>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     </nova:instance>
Nov 22 08:24:50 compute-0 nova_compute[189268]:   </metadata>
Nov 22 08:24:50 compute-0 nova_compute[189268]:   <sysinfo type="smbios">
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <system>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <entry name="manufacturer">RDO</entry>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <entry name="product">OpenStack Compute</entry>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <entry name="serial">78b5db02-f49a-4c0b-b4f6-8d3b3d689e66</entry>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <entry name="uuid">78b5db02-f49a-4c0b-b4f6-8d3b3d689e66</entry>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <entry name="family">Virtual Machine</entry>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     </system>
Nov 22 08:24:50 compute-0 nova_compute[189268]:   </sysinfo>
Nov 22 08:24:50 compute-0 nova_compute[189268]:   <os>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <boot dev="hd"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <smbios mode="sysinfo"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:   </os>
Nov 22 08:24:50 compute-0 nova_compute[189268]:   <features>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <acpi/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <apic/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <vmcoreinfo/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:   </features>
Nov 22 08:24:50 compute-0 nova_compute[189268]:   <clock offset="utc">
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <timer name="hpet" present="no"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:   </clock>
Nov 22 08:24:50 compute-0 nova_compute[189268]:   <cpu mode="host-model" match="exact">
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:24:50 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <target dev="vda" bus="virtio"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <target dev="vdb" bus="virtio"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <disk type="file" device="cdrom">
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <driver name="qemu" type="raw" cache="none"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.config"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <target dev="sda" bus="sata"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <interface type="ethernet">
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <mac address="fa:16:3e:4f:4a:5d"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <mtu size="1442"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <target dev="tap4645bc8c-a8"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     </interface>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <serial type="pty">
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <log file="/var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/console.log" append="off"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     </serial>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <video>
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     </video>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <input type="tablet" bus="usb"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <rng model="virtio">
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <backend model="random">/dev/urandom</backend>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <controller type="usb" index="0"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     <memballoon model="virtio">
Nov 22 08:24:50 compute-0 nova_compute[189268]:       <stats period="10"/>
Nov 22 08:24:50 compute-0 nova_compute[189268]:     </memballoon>
Nov 22 08:24:50 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:24:50 compute-0 nova_compute[189268]: </domain>
Nov 22 08:24:50 compute-0 nova_compute[189268]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
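
A few things worth noting in the generated domain: <memory> is in KiB (524288 KiB is the flavor's 512 MiB), the q35 machine type comes from the image's hw_machine_type property, and the long run of pcie-root-port controllers reserves PCIe slots so devices can be hotplugged into the q35 topology later. For sanity-checking a dumped domain (for example one saved from virsh dumpxml), the interesting bits can be pulled out with the standard library:

    import xml.etree.ElementTree as ET

    # "domain.xml" is assumed to hold the XML dumped above.
    root = ET.parse("domain.xml").getroot()

    for disk in root.iter("disk"):
        print(disk.get("device"),
              disk.find("target").get("dev"),
              disk.find("source").get("file"))

    for iface in root.iter("interface"):
        print(iface.get("type"),
              iface.find("mac").get("address"),
              iface.find("target").get("dev"))
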
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.891 189273 DEBUG nova.compute.manager [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Preparing to wait for external event network-vif-plugged-4645bc8c-a850-4f1b-9ebc-89d2ba862ffe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.891 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.891 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.891 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
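
The events lock above is Nova registering a waiter for network-vif-plugged before it plugs the VIF or defines the domain; plugging first and waiting second would risk missing an event that Neutron delivers immediately. A toy analogue of that register-then-wait pattern (illustrative only, not Nova's implementation):

    import threading
    from collections import defaultdict

    class InstanceEvents:
        def __init__(self):
            self._events = defaultdict(threading.Event)

        def prepare(self, instance_uuid: str, name: str) -> threading.Event:
            # Register before starting the work so completion cannot be missed.
            return self._events[(instance_uuid, name)]

        def deliver(self, instance_uuid: str, name: str) -> None:
            # Called when Neutron posts the external event back to nova-api.
            self._events[(instance_uuid, name)].set()

    events = InstanceEvents()
    waiter = events.prepare("78b5db02-f49a-4c0b-b4f6-8d3b3d689e66",
                            "network-vif-plugged-4645bc8c-a850-4f1b-9ebc-89d2ba862ffe")
    # ... plug the VIF, define and launch the domain ...
    # waiter.wait(timeout=300)  # nova's vif_plugging_timeout defaults to 300s
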
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.892 189273 DEBUG nova.virt.libvirt.vif [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:24:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='80e46844b3824928a6138235e5ede512',ramdisk_id='',reservation_id='r-mmjvr90v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:24:46Z,user_data=None,user_id='27ed1dd009ad4e29863ab5e3a9826c94',uuid=78b5db02-f49a-4c0b-b4f6-8d3b3d689e66,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "address": "fa:16:3e:4f:4a:5d", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.53", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4645bc8c-a8", "ovs_interfaceid": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.892 189273 DEBUG nova.network.os_vif_util [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converting VIF {"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "address": "fa:16:3e:4f:4a:5d", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.53", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4645bc8c-a8", "ovs_interfaceid": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.893 189273 DEBUG nova.network.os_vif_util [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:4a:5d,bridge_name='br-int',has_traffic_filtering=True,id=4645bc8c-a850-4f1b-9ebc-89d2ba862ffe,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4645bc8c-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.893 189273 DEBUG os_vif [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:4a:5d,bridge_name='br-int',has_traffic_filtering=True,id=4645bc8c-a850-4f1b-9ebc-89d2ba862ffe,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4645bc8c-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.927 189273 DEBUG ovsdbapp.backend.ovs_idl [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.928 189273 DEBUG ovsdbapp.backend.ovs_idl [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.928 189273 DEBUG ovsdbapp.backend.ovs_idl [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.928 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.929 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.929 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.930 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.931 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.933 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.943 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.944 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.944 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:24:50 compute-0 nova_compute[189268]: 2025-11-22 08:24:50.945 189273 INFO oslo.privsep.daemon [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp8tw885h8/privsep.sock']
Nov 22 08:24:51 compute-0 nova_compute[189268]: 2025-11-22 08:24:51.529 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:24:51 compute-0 nova_compute[189268]: 2025-11-22 08:24:51.624 189273 INFO oslo.privsep.daemon [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Spawned new privsep daemon via rootwrap
Nov 22 08:24:51 compute-0 nova_compute[189268]: 2025-11-22 08:24:51.492 239612 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 22 08:24:51 compute-0 nova_compute[189268]: 2025-11-22 08:24:51.495 239612 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 22 08:24:51 compute-0 nova_compute[189268]: 2025-11-22 08:24:51.497 239612 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Nov 22 08:24:51 compute-0 nova_compute[189268]: 2025-11-22 08:24:51.498 239612 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239612
Nov 22 08:24:51 compute-0 nova_compute[189268]: 2025-11-22 08:24:51.942 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:24:51 compute-0 nova_compute[189268]: 2025-11-22 08:24:51.942 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4645bc8c-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:24:51 compute-0 nova_compute[189268]: 2025-11-22 08:24:51.943 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4645bc8c-a8, col_values=(('external_ids', {'iface-id': '4645bc8c-a850-4f1b-9ebc-89d2ba862ffe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4f:4a:5d', 'vm-uuid': '78b5db02-f49a-4c0b-b4f6-8d3b3d689e66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:24:51 compute-0 nova_compute[189268]: 2025-11-22 08:24:51.945 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:24:51 compute-0 NetworkManager[56326]: <info>  [1763799891.9468] manager: (tap4645bc8c-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 22 08:24:51 compute-0 nova_compute[189268]: 2025-11-22 08:24:51.947 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:24:51 compute-0 nova_compute[189268]: 2025-11-22 08:24:51.953 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:24:51 compute-0 nova_compute[189268]: 2025-11-22 08:24:51.955 189273 INFO os_vif [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:4a:5d,bridge_name='br-int',has_traffic_filtering=True,id=4645bc8c-a850-4f1b-9ebc-89d2ba862ffe,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4645bc8c-a8')
Nov 22 08:24:52 compute-0 nova_compute[189268]: 2025-11-22 08:24:52.015 189273 DEBUG nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:24:52 compute-0 nova_compute[189268]: 2025-11-22 08:24:52.015 189273 DEBUG nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:24:52 compute-0 nova_compute[189268]: 2025-11-22 08:24:52.016 189273 DEBUG nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:24:52 compute-0 nova_compute[189268]: 2025-11-22 08:24:52.016 189273 DEBUG nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No VIF found with MAC fa:16:3e:4f:4a:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 08:24:52 compute-0 nova_compute[189268]: 2025-11-22 08:24:52.017 189273 INFO nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Using config drive
Nov 22 08:24:52 compute-0 nova_compute[189268]: 2025-11-22 08:24:52.577 189273 DEBUG nova.network.neutron [req-e2033137-3f89-411e-b108-db9d87a346f0 req-e9cba23b-dd3b-4bbe-bbf9-f69d8f6ad08f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updated VIF entry in instance network info cache for port 4645bc8c-a850-4f1b-9ebc-89d2ba862ffe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:24:52 compute-0 nova_compute[189268]: 2025-11-22 08:24:52.578 189273 DEBUG nova.network.neutron [req-e2033137-3f89-411e-b108-db9d87a346f0 req-e9cba23b-dd3b-4bbe-bbf9-f69d8f6ad08f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updating instance_info_cache with network_info: [{"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "address": "fa:16:3e:4f:4a:5d", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.53", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4645bc8c-a8", "ovs_interfaceid": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:24:52 compute-0 nova_compute[189268]: 2025-11-22 08:24:52.593 189273 DEBUG oslo_concurrency.lockutils [req-e2033137-3f89-411e-b108-db9d87a346f0 req-e9cba23b-dd3b-4bbe-bbf9-f69d8f6ad08f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:24:52 compute-0 nova_compute[189268]: 2025-11-22 08:24:52.919 189273 INFO nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Creating config drive at /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.config
Nov 22 08:24:52 compute-0 nova_compute[189268]: 2025-11-22 08:24:52.925 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbzbmz99b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.052 189273 DEBUG oslo_concurrency.processutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbzbmz99b" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:24:53 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 22 08:24:53 compute-0 kernel: tap4645bc8c-a8: entered promiscuous mode
Nov 22 08:24:53 compute-0 NetworkManager[56326]: <info>  [1763799893.1448] manager: (tap4645bc8c-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.146 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:24:53 compute-0 ovn_controller[97783]: 2025-11-22T08:24:53Z|00027|binding|INFO|Claiming lport 4645bc8c-a850-4f1b-9ebc-89d2ba862ffe for this chassis.
Nov 22 08:24:53 compute-0 ovn_controller[97783]: 2025-11-22T08:24:53Z|00028|binding|INFO|4645bc8c-a850-4f1b-9ebc-89d2ba862ffe: Claiming fa:16:3e:4f:4a:5d 192.168.0.53
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.153 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:24:53 compute-0 systemd-udevd[239639]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:24:53 compute-0 NetworkManager[56326]: <info>  [1763799893.1982] device (tap4645bc8c-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 08:24:53 compute-0 NetworkManager[56326]: <info>  [1763799893.1989] device (tap4645bc8c-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 08:24:53 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:53.223 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:4a:5d 192.168.0.53'], port_security=['fa:16:3e:4f:4a:5d 192.168.0.53'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.53/24', 'neutron:device_id': '78b5db02-f49a-4c0b-b4f6-8d3b3d689e66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80e46844b3824928a6138235e5ede512', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d35d3a2-03b3-4b0d-a4c4-f066616bbaa8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a46a1c4a-0f65-4313-a2a5-5e5bba4e3fd3, chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=4645bc8c-a850-4f1b-9ebc-89d2ba862ffe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:24:53 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:53.225 106642 INFO neutron.agent.ovn.metadata.agent [-] Port 4645bc8c-a850-4f1b-9ebc-89d2ba862ffe in datapath 02517cc7-8060-4764-b9b0-b1d7f59e3ae8 bound to our chassis
Nov 22 08:24:53 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:53.226 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02517cc7-8060-4764-b9b0-b1d7f59e3ae8
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.229 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:24:53 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:53.228 106642 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpue4g_u1g/privsep.sock']
Nov 22 08:24:53 compute-0 ovn_controller[97783]: 2025-11-22T08:24:53Z|00029|binding|INFO|Setting lport 4645bc8c-a850-4f1b-9ebc-89d2ba862ffe ovn-installed in OVS
Nov 22 08:24:53 compute-0 ovn_controller[97783]: 2025-11-22T08:24:53Z|00030|binding|INFO|Setting lport 4645bc8c-a850-4f1b-9ebc-89d2ba862ffe up in Southbound
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.242 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:24:53 compute-0 systemd-machined[155703]: New machine qemu-1-instance-00000001.
Nov 22 08:24:53 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.741 189273 DEBUG nova.compute.manager [req-94b4b4ff-561e-4505-97dc-9359a2240862 req-e5c3b25b-54f1-412f-aee2-350902ad1505 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Received event network-vif-plugged-4645bc8c-a850-4f1b-9ebc-89d2ba862ffe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.743 189273 DEBUG oslo_concurrency.lockutils [req-94b4b4ff-561e-4505-97dc-9359a2240862 req-e5c3b25b-54f1-412f-aee2-350902ad1505 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.743 189273 DEBUG oslo_concurrency.lockutils [req-94b4b4ff-561e-4505-97dc-9359a2240862 req-e5c3b25b-54f1-412f-aee2-350902ad1505 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.744 189273 DEBUG oslo_concurrency.lockutils [req-94b4b4ff-561e-4505-97dc-9359a2240862 req-e5c3b25b-54f1-412f-aee2-350902ad1505 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.744 189273 DEBUG nova.compute.manager [req-94b4b4ff-561e-4505-97dc-9359a2240862 req-e5c3b25b-54f1-412f-aee2-350902ad1505 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Processing event network-vif-plugged-4645bc8c-a850-4f1b-9ebc-89d2ba862ffe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.820 189273 DEBUG nova.compute.manager [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.821 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763799893.8196092, 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.822 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] VM Started (Lifecycle Event)
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.826 189273 DEBUG nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.841 189273 INFO nova.virt.libvirt.driver [-] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Instance spawned successfully.
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.841 189273 DEBUG nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.877 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.884 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.896 189273 DEBUG nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.898 189273 DEBUG nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.898 189273 DEBUG nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.899 189273 DEBUG nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.899 189273 DEBUG nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.900 189273 DEBUG nova.virt.libvirt.driver [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.904 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.904 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763799893.819757, 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.905 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] VM Paused (Lifecycle Event)
Nov 22 08:24:53 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:53.970 106642 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 22 08:24:53 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:53.970 106642 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpue4g_u1g/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 22 08:24:53 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:53.829 239666 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 22 08:24:53 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:53.833 239666 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 22 08:24:53 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:53.835 239666 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Nov 22 08:24:53 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:53.835 239666 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239666
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.973 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:24:53 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:53.974 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[daa7b293-2c7e-4403-83fe-bf66379ba91f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.979 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763799893.8253736, 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.980 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] VM Resumed (Lifecycle Event)
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.985 189273 INFO nova.compute.manager [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Took 7.65 seconds to spawn the instance on the hypervisor.
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.986 189273 DEBUG nova.compute.manager [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:24:53 compute-0 nova_compute[189268]: 2025-11-22 08:24:53.996 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:24:54 compute-0 nova_compute[189268]: 2025-11-22 08:24:54.001 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:24:54 compute-0 nova_compute[189268]: 2025-11-22 08:24:54.031 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:24:54 compute-0 nova_compute[189268]: 2025-11-22 08:24:54.068 189273 INFO nova.compute.manager [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Took 8.31 seconds to build instance.
Nov 22 08:24:54 compute-0 nova_compute[189268]: 2025-11-22 08:24:54.090 189273 DEBUG oslo_concurrency.lockutils [None req-8a351295-1026-4768-b6db-ae6b3f59c0ed 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:24:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:54.496 239666 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:24:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:54.496 239666 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:24:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:54.497 239666 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:24:55 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:55.193 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[dc0b3f14-ba64-4b99-aa17-4f91c55da357]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:24:55 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:55.194 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap02517cc7-81 in ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 08:24:55 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:55.197 239666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap02517cc7-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 08:24:55 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:55.197 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[2e111986-5042-4d28-bb90-4f0ed5a7c0d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:24:55 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:55.200 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[ed0d20ac-3ded-483c-ad76-9a9793fa3561]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:24:55 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:55.224 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[f03800d5-cb2d-4e44-9033-ef26575fa4a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:24:55 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:55.261 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[fbeb9c35-d9c7-432d-9194-23ef8826d3ae]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:24:55 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:55.263 106642 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpoy5rokmg/privsep.sock']
Nov 22 08:24:55 compute-0 podman[239675]: 2025-11-22 08:24:55.32956279 +0000 UTC m=+0.074999949 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 22 08:24:55 compute-0 podman[239677]: 2025-11-22 08:24:55.334287927 +0000 UTC m=+0.079025756 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 22 08:24:55 compute-0 podman[239678]: 2025-11-22 08:24:55.360022843 +0000 UTC m=+0.101957857 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 08:24:55 compute-0 nova_compute[189268]: 2025-11-22 08:24:55.841 189273 DEBUG nova.compute.manager [req-3b97a8a1-3e58-447c-a7a7-4fe9096d3102 req-4bcb06f8-5491-425f-a7e4-003ec73ed051 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Received event network-vif-plugged-4645bc8c-a850-4f1b-9ebc-89d2ba862ffe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:24:55 compute-0 nova_compute[189268]: 2025-11-22 08:24:55.843 189273 DEBUG oslo_concurrency.lockutils [req-3b97a8a1-3e58-447c-a7a7-4fe9096d3102 req-4bcb06f8-5491-425f-a7e4-003ec73ed051 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:24:55 compute-0 nova_compute[189268]: 2025-11-22 08:24:55.843 189273 DEBUG oslo_concurrency.lockutils [req-3b97a8a1-3e58-447c-a7a7-4fe9096d3102 req-4bcb06f8-5491-425f-a7e4-003ec73ed051 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:24:55 compute-0 nova_compute[189268]: 2025-11-22 08:24:55.844 189273 DEBUG oslo_concurrency.lockutils [req-3b97a8a1-3e58-447c-a7a7-4fe9096d3102 req-4bcb06f8-5491-425f-a7e4-003ec73ed051 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:24:55 compute-0 nova_compute[189268]: 2025-11-22 08:24:55.844 189273 DEBUG nova.compute.manager [req-3b97a8a1-3e58-447c-a7a7-4fe9096d3102 req-4bcb06f8-5491-425f-a7e4-003ec73ed051 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] No waiting events found dispatching network-vif-plugged-4645bc8c-a850-4f1b-9ebc-89d2ba862ffe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:24:55 compute-0 nova_compute[189268]: 2025-11-22 08:24:55.845 189273 WARNING nova.compute.manager [req-3b97a8a1-3e58-447c-a7a7-4fe9096d3102 req-4bcb06f8-5491-425f-a7e4-003ec73ed051 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Received unexpected event network-vif-plugged-4645bc8c-a850-4f1b-9ebc-89d2ba862ffe for instance with vm_state active and task_state None.
Nov 22 08:24:56 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:56.002 106642 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 22 08:24:56 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:56.003 106642 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpoy5rokmg/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 22 08:24:56 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:55.857 239736 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 22 08:24:56 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:55.862 239736 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 22 08:24:56 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:55.865 239736 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 22 08:24:56 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:55.865 239736 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239736
Nov 22 08:24:56 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:56.006 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[12d1677a-89c4-44ca-80f7-f94da349a4d3]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:24:56 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 22 08:24:56 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 22 08:24:56 compute-0 nova_compute[189268]: 2025-11-22 08:24:56.531 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:24:56 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:56.542 239736 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:24:56 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:56.543 239736 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:24:56 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:56.543 239736 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:24:56 compute-0 nova_compute[189268]: 2025-11-22 08:24:56.946 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:57.163 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7c9832-dc32-4775-a3f6-63fe7b052f5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:57.195 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[3f1bdd1d-3b84-4fe1-bd0c-546f2bd406b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:24:57 compute-0 NetworkManager[56326]: <info>  [1763799897.1972] manager: (tap02517cc7-80): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Nov 22 08:24:57 compute-0 systemd-udevd[239767]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:57.233 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[c14687f0-f3b9-4118-ae72-3fc3f165eb8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:57.239 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b61d30-23d1-4e32-a90b-84a53b75e965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:24:57 compute-0 NetworkManager[56326]: <info>  [1763799897.2679] device (tap02517cc7-80): carrier: link connected
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:57.275 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[0647970d-c2d2-4f22-88d8-261a0811f8a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:57.293 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[a57be5c5-f6f9-4e04-8d57-cc1786b5ef0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02517cc7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:86:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501085, 'reachable_time': 38920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239785, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:57.311 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[a28102ae-190d-497b-b76a-76c07ebe3762]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feea:865a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501085, 'tstamp': 501085}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239786, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:57.327 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[7c410824-25b6-4f68-829c-47ccd5f636a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02517cc7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:86:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501085, 'reachable_time': 38920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239787, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:57.362 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd8583a-381c-46b3-a9ae-e18c90b8c91f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:57.429 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[10d03fdc-2298-4847-b2ac-847bd72b3c8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:57.432 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02517cc7-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:57.432 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:57.433 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02517cc7-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:24:57 compute-0 nova_compute[189268]: 2025-11-22 08:24:57.439 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:24:57 compute-0 kernel: tap02517cc7-80: entered promiscuous mode
Nov 22 08:24:57 compute-0 NetworkManager[56326]: <info>  [1763799897.4408] manager: (tap02517cc7-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:57.445 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02517cc7-80, col_values=(('external_ids', {'iface-id': '5e2a8859-83a6-4000-bcad-5571f3c7bd5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:24:57 compute-0 ovn_controller[97783]: 2025-11-22T08:24:57Z|00031|binding|INFO|Releasing lport 5e2a8859-83a6-4000-bcad-5571f3c7bd5d from this chassis (sb_readonly=0)
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:57.451 106642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/02517cc7-8060-4764-b9b0-b1d7f59e3ae8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/02517cc7-8060-4764-b9b0-b1d7f59e3ae8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:57.453 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[ebdfa608-29f8-4c23-a0ad-ed37d8411fb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:57.454 106642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: global
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     log         /dev/log local0 debug
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     log-tag     haproxy-metadata-proxy-02517cc7-8060-4764-b9b0-b1d7f59e3ae8
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     user        root
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     group       root
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     maxconn     1024
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     pidfile     /var/lib/neutron/external/pids/02517cc7-8060-4764-b9b0-b1d7f59e3ae8.pid.haproxy
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     daemon
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: defaults
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     log global
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     mode http
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     option httplog
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     option dontlognull
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     option http-server-close
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     option forwardfor
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     retries                 3
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     timeout http-request    30s
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     timeout connect         30s
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     timeout client          32s
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     timeout server          32s
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     timeout http-keep-alive 30s
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: listen listener
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     bind 169.254.169.254:80
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:     http-request add-header X-OVN-Network-ID 02517cc7-8060-4764-b9b0-b1d7f59e3ae8
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 08:24:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:24:57.457 106642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'env', 'PROCESS_TAG=haproxy-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/02517cc7-8060-4764-b9b0-b1d7f59e3ae8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 08:24:57 compute-0 nova_compute[189268]: 2025-11-22 08:24:57.473 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:24:57 compute-0 podman[239818]: 2025-11-22 08:24:57.951176786 +0000 UTC m=+0.074669298 container create 2e1b0933d82ee1f2521bdc16470445f046c04ff32b8db5a776fbc580519eef6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:24:57 compute-0 systemd[1]: Started libpod-conmon-2e1b0933d82ee1f2521bdc16470445f046c04ff32b8db5a776fbc580519eef6a.scope.
Nov 22 08:24:58 compute-0 podman[239818]: 2025-11-22 08:24:57.913338014 +0000 UTC m=+0.036830546 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 08:24:58 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/103cabd7f9946853411e67f45e0950d49d6f60b5772dc4aec63ec1a344d6c3d1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 08:24:58 compute-0 podman[239818]: 2025-11-22 08:24:58.061479388 +0000 UTC m=+0.184971920 container init 2e1b0933d82ee1f2521bdc16470445f046c04ff32b8db5a776fbc580519eef6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 08:24:58 compute-0 podman[239818]: 2025-11-22 08:24:58.069907976 +0000 UTC m=+0.193400478 container start 2e1b0933d82ee1f2521bdc16470445f046c04ff32b8db5a776fbc580519eef6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 22 08:24:58 compute-0 neutron-haproxy-ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8[239833]: [NOTICE]   (239837) : New worker (239839) forked
Nov 22 08:24:58 compute-0 neutron-haproxy-ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8[239833]: [NOTICE]   (239837) : Loading success.
Nov 22 08:24:59 compute-0 podman[203476]: time="2025-11-22T08:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:24:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:24:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4768 "" "Go-http-client/1.1"
Nov 22 08:25:01 compute-0 openstack_network_exporter[205661]: ERROR   08:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:25:01 compute-0 openstack_network_exporter[205661]: ERROR   08:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:25:01 compute-0 openstack_network_exporter[205661]: ERROR   08:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:25:01 compute-0 openstack_network_exporter[205661]: ERROR   08:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:25:01 compute-0 openstack_network_exporter[205661]: 
Nov 22 08:25:01 compute-0 openstack_network_exporter[205661]: ERROR   08:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:25:01 compute-0 openstack_network_exporter[205661]: 
Nov 22 08:25:01 compute-0 nova_compute[189268]: 2025-11-22 08:25:01.534 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:01 compute-0 nova_compute[189268]: 2025-11-22 08:25:01.949 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:03 compute-0 podman[239848]: 2025-11-22 08:25:03.136112953 +0000 UTC m=+0.092335906 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm)
Nov 22 08:25:03 compute-0 podman[239849]: 2025-11-22 08:25:03.151644123 +0000 UTC m=+0.093539828 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 22 08:25:03 compute-0 ovn_controller[97783]: 2025-11-22T08:25:03Z|00032|binding|INFO|Releasing lport 5e2a8859-83a6-4000-bcad-5571f3c7bd5d from this chassis (sb_readonly=0)
Nov 22 08:25:03 compute-0 NetworkManager[56326]: <info>  [1763799903.7242] manager: (patch-br-int-to-provnet-4626db62-a226-41d4-b94f-04168db037c0): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Nov 22 08:25:03 compute-0 NetworkManager[56326]: <info>  [1763799903.7268] device (patch-br-int-to-provnet-4626db62-a226-41d4-b94f-04168db037c0)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 08:25:03 compute-0 nova_compute[189268]: 2025-11-22 08:25:03.730 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:03 compute-0 NetworkManager[56326]: <info>  [1763799903.7323] manager: (patch-provnet-4626db62-a226-41d4-b94f-04168db037c0-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Nov 22 08:25:03 compute-0 NetworkManager[56326]: <info>  [1763799903.7349] device (patch-provnet-4626db62-a226-41d4-b94f-04168db037c0-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 08:25:03 compute-0 NetworkManager[56326]: <info>  [1763799903.7400] manager: (patch-br-int-to-provnet-4626db62-a226-41d4-b94f-04168db037c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Nov 22 08:25:03 compute-0 NetworkManager[56326]: <info>  [1763799903.7433] manager: (patch-provnet-4626db62-a226-41d4-b94f-04168db037c0-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 22 08:25:03 compute-0 NetworkManager[56326]: <info>  [1763799903.7458] device (patch-br-int-to-provnet-4626db62-a226-41d4-b94f-04168db037c0)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 22 08:25:03 compute-0 NetworkManager[56326]: <info>  [1763799903.7483] device (patch-provnet-4626db62-a226-41d4-b94f-04168db037c0-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 22 08:25:03 compute-0 ovn_controller[97783]: 2025-11-22T08:25:03Z|00033|binding|INFO|Releasing lport 5e2a8859-83a6-4000-bcad-5571f3c7bd5d from this chassis (sb_readonly=0)
Nov 22 08:25:03 compute-0 nova_compute[189268]: 2025-11-22 08:25:03.758 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:03 compute-0 nova_compute[189268]: 2025-11-22 08:25:03.767 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:05 compute-0 nova_compute[189268]: 2025-11-22 08:25:05.259 189273 DEBUG nova.compute.manager [req-310c3f03-d8f1-4233-bdfd-6a98b68515ff req-7b23873c-dec6-4727-929c-5b8fd83a3280 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Received event network-changed-4645bc8c-a850-4f1b-9ebc-89d2ba862ffe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:25:05 compute-0 nova_compute[189268]: 2025-11-22 08:25:05.260 189273 DEBUG nova.compute.manager [req-310c3f03-d8f1-4233-bdfd-6a98b68515ff req-7b23873c-dec6-4727-929c-5b8fd83a3280 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Refreshing instance network info cache due to event network-changed-4645bc8c-a850-4f1b-9ebc-89d2ba862ffe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:25:05 compute-0 nova_compute[189268]: 2025-11-22 08:25:05.261 189273 DEBUG oslo_concurrency.lockutils [req-310c3f03-d8f1-4233-bdfd-6a98b68515ff req-7b23873c-dec6-4727-929c-5b8fd83a3280 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:25:05 compute-0 nova_compute[189268]: 2025-11-22 08:25:05.261 189273 DEBUG oslo_concurrency.lockutils [req-310c3f03-d8f1-4233-bdfd-6a98b68515ff req-7b23873c-dec6-4727-929c-5b8fd83a3280 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:25:05 compute-0 nova_compute[189268]: 2025-11-22 08:25:05.262 189273 DEBUG nova.network.neutron [req-310c3f03-d8f1-4233-bdfd-6a98b68515ff req-7b23873c-dec6-4727-929c-5b8fd83a3280 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Refreshing network info cache for port 4645bc8c-a850-4f1b-9ebc-89d2ba862ffe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:25:06 compute-0 podman[239888]: 2025-11-22 08:25:06.134591097 +0000 UTC m=+0.086393037 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, version=9.4, com.redhat.component=ubi9-container, release=1214.1726694543, architecture=x86_64, io.buildah.version=1.29.0, release-0.7.12=, vendor=Red Hat, Inc.)
Nov 22 08:25:06 compute-0 podman[239889]: 2025-11-22 08:25:06.206922452 +0000 UTC m=+0.155365090 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 08:25:06 compute-0 nova_compute[189268]: 2025-11-22 08:25:06.537 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:06 compute-0 nova_compute[189268]: 2025-11-22 08:25:06.952 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:07 compute-0 nova_compute[189268]: 2025-11-22 08:25:07.732 189273 DEBUG nova.network.neutron [req-310c3f03-d8f1-4233-bdfd-6a98b68515ff req-7b23873c-dec6-4727-929c-5b8fd83a3280 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updated VIF entry in instance network info cache for port 4645bc8c-a850-4f1b-9ebc-89d2ba862ffe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:25:07 compute-0 nova_compute[189268]: 2025-11-22 08:25:07.733 189273 DEBUG nova.network.neutron [req-310c3f03-d8f1-4233-bdfd-6a98b68515ff req-7b23873c-dec6-4727-929c-5b8fd83a3280 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updating instance_info_cache with network_info: [{"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "address": "fa:16:3e:4f:4a:5d", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.53", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4645bc8c-a8", "ovs_interfaceid": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:25:07 compute-0 nova_compute[189268]: 2025-11-22 08:25:07.873 189273 DEBUG oslo_concurrency.lockutils [req-310c3f03-d8f1-4233-bdfd-6a98b68515ff req-7b23873c-dec6-4727-929c-5b8fd83a3280 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:25:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:25:09.957 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:25:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:25:09.957 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:25:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:25:09.958 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:25:11 compute-0 nova_compute[189268]: 2025-11-22 08:25:11.538 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:11 compute-0 nova_compute[189268]: 2025-11-22 08:25:11.954 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:12 compute-0 podman[239931]: 2025-11-22 08:25:12.119049535 +0000 UTC m=+0.076204361 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.buildah.version=1.33.7, container_name=openstack_network_exporter, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_id=edpm, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9)
Nov 22 08:25:14 compute-0 podman[239952]: 2025-11-22 08:25:14.114261511 +0000 UTC m=+0.069712925 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:25:16 compute-0 nova_compute[189268]: 2025-11-22 08:25:16.541 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:16 compute-0 nova_compute[189268]: 2025-11-22 08:25:16.958 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:17 compute-0 nova_compute[189268]: 2025-11-22 08:25:17.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:25:17 compute-0 nova_compute[189268]: 2025-11-22 08:25:17.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 08:25:17 compute-0 nova_compute[189268]: 2025-11-22 08:25:17.364 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:25:17 compute-0 nova_compute[189268]: 2025-11-22 08:25:17.381 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Triggering sync for uuid 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 08:25:17 compute-0 nova_compute[189268]: 2025-11-22 08:25:17.382 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:25:17 compute-0 nova_compute[189268]: 2025-11-22 08:25:17.383 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:25:17 compute-0 nova_compute[189268]: 2025-11-22 08:25:17.420 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:25:18 compute-0 nova_compute[189268]: 2025-11-22 08:25:18.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:25:18 compute-0 nova_compute[189268]: 2025-11-22 08:25:18.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 08:25:18 compute-0 nova_compute[189268]: 2025-11-22 08:25:18.114 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 08:25:20 compute-0 nova_compute[189268]: 2025-11-22 08:25:20.115 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:25:20 compute-0 nova_compute[189268]: 2025-11-22 08:25:20.116 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:25:20 compute-0 nova_compute[189268]: 2025-11-22 08:25:20.117 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:25:20 compute-0 nova_compute[189268]: 2025-11-22 08:25:20.565 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:25:20 compute-0 nova_compute[189268]: 2025-11-22 08:25:20.565 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:25:20 compute-0 nova_compute[189268]: 2025-11-22 08:25:20.566 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:25:20 compute-0 nova_compute[189268]: 2025-11-22 08:25:20.566 189273 DEBUG nova.objects.instance [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:25:21 compute-0 nova_compute[189268]: 2025-11-22 08:25:21.543 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:21 compute-0 nova_compute[189268]: 2025-11-22 08:25:21.588 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updating instance_info_cache with network_info: [{"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "address": "fa:16:3e:4f:4a:5d", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.53", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4645bc8c-a8", "ovs_interfaceid": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:25:21 compute-0 nova_compute[189268]: 2025-11-22 08:25:21.601 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:25:21 compute-0 nova_compute[189268]: 2025-11-22 08:25:21.602 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 08:25:21 compute-0 nova_compute[189268]: 2025-11-22 08:25:21.603 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:25:21 compute-0 nova_compute[189268]: 2025-11-22 08:25:21.603 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:25:21 compute-0 nova_compute[189268]: 2025-11-22 08:25:21.961 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:23 compute-0 nova_compute[189268]: 2025-11-22 08:25:23.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:25:23 compute-0 nova_compute[189268]: 2025-11-22 08:25:23.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:25:24 compute-0 nova_compute[189268]: 2025-11-22 08:25:24.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:25:24 compute-0 nova_compute[189268]: 2025-11-22 08:25:24.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:25:25 compute-0 nova_compute[189268]: 2025-11-22 08:25:25.095 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:25:25 compute-0 nova_compute[189268]: 2025-11-22 08:25:25.109 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:25:26 compute-0 nova_compute[189268]: 2025-11-22 08:25:26.106 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:25:26 compute-0 nova_compute[189268]: 2025-11-22 08:25:26.106 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:25:26 compute-0 podman[239977]: 2025-11-22 08:25:26.127618639 +0000 UTC m=+0.072500651 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 08:25:26 compute-0 podman[239979]: 2025-11-22 08:25:26.175192085 +0000 UTC m=+0.108686008 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 08:25:26 compute-0 podman[239978]: 2025-11-22 08:25:26.198582287 +0000 UTC m=+0.138215787 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 22 08:25:26 compute-0 nova_compute[189268]: 2025-11-22 08:25:26.547 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:26 compute-0 nova_compute[189268]: 2025-11-22 08:25:26.966 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:27 compute-0 nova_compute[189268]: 2025-11-22 08:25:27.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:25:27 compute-0 nova_compute[189268]: 2025-11-22 08:25:27.123 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:25:27 compute-0 nova_compute[189268]: 2025-11-22 08:25:27.124 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:25:27 compute-0 nova_compute[189268]: 2025-11-22 08:25:27.124 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:25:27 compute-0 nova_compute[189268]: 2025-11-22 08:25:27.124 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:25:27 compute-0 nova_compute[189268]: 2025-11-22 08:25:27.215 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:25:27 compute-0 nova_compute[189268]: 2025-11-22 08:25:27.275 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:25:27 compute-0 nova_compute[189268]: 2025-11-22 08:25:27.277 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:25:27 compute-0 nova_compute[189268]: 2025-11-22 08:25:27.346 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:25:27 compute-0 nova_compute[189268]: 2025-11-22 08:25:27.347 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:25:27 compute-0 nova_compute[189268]: 2025-11-22 08:25:27.436 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:25:27 compute-0 nova_compute[189268]: 2025-11-22 08:25:27.437 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:25:27 compute-0 nova_compute[189268]: 2025-11-22 08:25:27.497 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:25:27 compute-0 nova_compute[189268]: 2025-11-22 08:25:27.888 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:25:27 compute-0 nova_compute[189268]: 2025-11-22 08:25:27.890 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5317MB free_disk=72.52542114257812GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:25:27 compute-0 nova_compute[189268]: 2025-11-22 08:25:27.891 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:25:27 compute-0 nova_compute[189268]: 2025-11-22 08:25:27.891 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:25:28 compute-0 nova_compute[189268]: 2025-11-22 08:25:28.093 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:25:28 compute-0 nova_compute[189268]: 2025-11-22 08:25:28.095 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:25:28 compute-0 nova_compute[189268]: 2025-11-22 08:25:28.096 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
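The final view reconciles with the single tracked instance two lines up: used_ram is the 512 MB host reservation plus the instance's 512 MB MEMORY_MB allocation, and used_disk is its 2 GB DISK_GB allocation. A quick check, assuming (as nova's resource tracker does) that used_ram counts reserved host memory:

reserved_host_memory_mb = 512   # 'reserved' in the MEMORY_MB inventory below
instance_memory_mb = 512        # MEMORY_MB allocation of instance 78b5db02
assert reserved_host_memory_mb + instance_memory_mb == 1024  # used_ram=1024MB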
Nov 22 08:25:28 compute-0 nova_compute[189268]: 2025-11-22 08:25:28.150 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing inventories for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 08:25:28 compute-0 nova_compute[189268]: 2025-11-22 08:25:28.227 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating ProviderTree inventory for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 08:25:28 compute-0 nova_compute[189268]: 2025-11-22 08:25:28.228 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating inventory in ProviderTree for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
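Given this inventory, placement's schedulable capacity per resource class is (total - reserved) * allocation_ratio; placement applies the formula server-side, but the arithmetic is easy to check against the numbers above:

inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 79,   'reserved': 0,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~71.1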
Nov 22 08:25:28 compute-0 nova_compute[189268]: 2025-11-22 08:25:28.247 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing aggregate associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 08:25:28 compute-0 nova_compute[189268]: 2025-11-22 08:25:28.266 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing trait associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, traits: COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 08:25:28 compute-0 nova_compute[189268]: 2025-11-22 08:25:28.308 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating inventory in ProviderTree for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 08:25:28 compute-0 nova_compute[189268]: 2025-11-22 08:25:28.348 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updated inventory for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 22 08:25:28 compute-0 nova_compute[189268]: 2025-11-22 08:25:28.349 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 22 08:25:28 compute-0 nova_compute[189268]: 2025-11-22 08:25:28.350 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating inventory in ProviderTree for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 08:25:28 compute-0 nova_compute[189268]: 2025-11-22 08:25:28.366 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:25:28 compute-0 nova_compute[189268]: 2025-11-22 08:25:28.367 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.475s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
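The Acquiring/acquired/released triplets around _update_available_resource come from oslo.concurrency's named in-process locks. A minimal sketch of the same pattern, assuming the decorator form of lockutils:

from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def update_available_resource():
    # Body runs under the same named lock the resource tracker held for
    # 0.475 s in the log; concurrent callers queue at the "Acquiring" line.
    pass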
Nov 22 08:25:29 compute-0 podman[203476]: time="2025-11-22T08:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:25:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:25:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4781 "" "Go-http-client/1.1"
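These access-log lines are the libpod REST API answering a metrics collector over podman's unix socket. A self-contained sketch of the same GET using only the standard library; the socket path is an assumption (podman's default system socket):

import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client over an AF_UNIX socket instead of TCP."""
    def __init__(self, socket_path):
        super().__init__('localhost')
        self._socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._socket_path)

conn = UnixHTTPConnection('/run/podman/podman.sock')  # assumed socket path
conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
resp = conn.getresponse()
print(resp.status, len(resp.read()))   # e.g. 200 and a JSON body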
Nov 22 08:25:31 compute-0 openstack_network_exporter[205661]: ERROR   08:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:25:31 compute-0 openstack_network_exporter[205661]: ERROR   08:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:25:31 compute-0 openstack_network_exporter[205661]: ERROR   08:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:25:31 compute-0 openstack_network_exporter[205661]: ERROR   08:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:25:31 compute-0 openstack_network_exporter[205661]: ERROR   08:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:25:31 compute-0 ovn_controller[97783]: 2025-11-22T08:25:31Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4f:4a:5d 192.168.0.53
Nov 22 08:25:31 compute-0 ovn_controller[97783]: 2025-11-22T08:25:31Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4f:4a:5d 192.168.0.53
Nov 22 08:25:31 compute-0 nova_compute[189268]: 2025-11-22 08:25:31.547 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:31 compute-0 nova_compute[189268]: 2025-11-22 08:25:31.968 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:33 compute-0 ovn_controller[97783]: 2025-11-22T08:25:33Z|00034|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 22 08:25:33 compute-0 podman[240064]: 2025-11-22 08:25:33.818772035 +0000 UTC m=+0.064286989 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true)
Nov 22 08:25:33 compute-0 podman[240063]: 2025-11-22 08:25:33.842064805 +0000 UTC m=+0.089847550 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 22 08:25:36 compute-0 nova_compute[189268]: 2025-11-22 08:25:36.549 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:36 compute-0 nova_compute[189268]: 2025-11-22 08:25:36.970 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:37 compute-0 podman[240104]: 2025-11-22 08:25:37.128663154 +0000 UTC m=+0.071938985 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 22 08:25:37 compute-0 podman[240105]: 2025-11-22 08:25:37.184782601 +0000 UTC m=+0.124499777 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 08:25:41 compute-0 nova_compute[189268]: 2025-11-22 08:25:41.551 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:41 compute-0 nova_compute[189268]: 2025-11-22 08:25:41.973 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:43 compute-0 podman[240145]: 2025-11-22 08:25:43.123734058 +0000 UTC m=+0.079795528 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public)
Nov 22 08:25:44 compute-0 podman[240166]: 2025-11-22 08:25:44.734320578 +0000 UTC m=+0.066371584 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:25:46 compute-0 nova_compute[189268]: 2025-11-22 08:25:46.553 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:46 compute-0 nova_compute[189268]: 2025-11-22 08:25:46.975 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:50 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:25:50.090 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:cf:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'd6:f7:8f:a1:cd:35'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:25:50 compute-0 nova_compute[189268]: 2025-11-22 08:25:50.091 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:50 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:25:50.092 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
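The "Matched UPDATE" line is ovsdbapp's IDL notify loop testing the SB_Global row change (nb_cfg 3 -> 4) against registered row events; the metadata agent's handler then defers its chassis refresh by 10 seconds. A hedged sketch of such an event class, mirroring the constructor arguments printed in the log (the handler body is illustrative, not neutron's actual code):

from ovsdbapp.backend.ovs_idl import event

class SbGlobalUpdateEvent(event.RowEvent):
    """Match any update to the SB_Global table, as in the log line."""
    def __init__(self):
        # events=('update',), table='SB_Global', conditions=None
        super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

    def run(self, event, row, old):
        # 'old' carries only the changed columns; here nb_cfg went 3 -> 4.
        print('nb_cfg bumped to', row.nb_cfg)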
Nov 22 08:25:51 compute-0 nova_compute[189268]: 2025-11-22 08:25:51.556 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:51 compute-0 nova_compute[189268]: 2025-11-22 08:25:51.978 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.257 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "a8349cde-3de3-4359-9fba-8d329cab9476" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.258 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "a8349cde-3de3-4359-9fba-8d329cab9476" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.272 189273 DEBUG nova.compute.manager [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.340 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.340 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.349 189273 DEBUG nova.virt.hardware [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.349 189273 INFO nova.compute.claims [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Claim successful on node compute-0.ctlplane.example.com
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.459 189273 DEBUG nova.compute.provider_tree [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.474 189273 DEBUG nova.scheduler.client.report [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.490 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.491 189273 DEBUG nova.compute.manager [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.541 189273 DEBUG nova.compute.manager [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.542 189273 DEBUG nova.network.neutron [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.563 189273 INFO nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.599 189273 DEBUG nova.compute.manager [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.682 189273 DEBUG nova.compute.manager [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.684 189273 DEBUG nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.684 189273 INFO nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Creating image(s)
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.685 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "/var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.686 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.687 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.699 189273 DEBUG oslo_concurrency.processutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.757 189273 DEBUG oslo_concurrency.processutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.758 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "3743d624bf4f49380cb6de0480bbb028361f5cb4" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.759 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "3743d624bf4f49380cb6de0480bbb028361f5cb4" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.773 189273 DEBUG oslo_concurrency.processutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.830 189273 DEBUG oslo_concurrency.processutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.831 189273 DEBUG oslo_concurrency.processutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4,backing_fmt=raw /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.880 189273 DEBUG oslo_concurrency.processutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4,backing_fmt=raw /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk 1073741824" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.882 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "3743d624bf4f49380cb6de0480bbb028361f5cb4" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
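The create above is nova's copy-on-write image backend at work: the instance's disk is a qcow2 overlay whose backing file is the shared raw base image under _base, so the 1 GiB (1073741824-byte) root disk starts out nearly empty on the hypervisor's filesystem. A standalone sketch of the same layering, with illustrative /tmp paths standing in for the nova directories:

import json
import subprocess

base = '/tmp/base.raw'        # stands in for the _base image
overlay = '/tmp/disk.qcow2'   # stands in for the instance's disk

subprocess.run(['qemu-img', 'create', '-f', 'raw', base, '1G'], check=True)
# Same flags as the logged command: qcow2 overlay over a raw backing file.
subprocess.run(['qemu-img', 'create', '-f', 'qcow2',
                '-o', f'backing_file={base},backing_fmt=raw',
                overlay, str(1073741824)], check=True)

info = json.loads(subprocess.check_output(
    ['qemu-img', 'info', '--output=json', overlay]))
print(info['backing-filename'])   # -> /tmp/base.raw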
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.882 189273 DEBUG oslo_concurrency.processutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.940 189273 DEBUG oslo_concurrency.processutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.942 189273 DEBUG nova.virt.disk.api [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Checking if we can resize image /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 22 08:25:53 compute-0 nova_compute[189268]: 2025-11-22 08:25:53.942 189273 DEBUG oslo_concurrency.processutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.002 189273 DEBUG oslo_concurrency.processutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.003 189273 DEBUG nova.virt.disk.api [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Cannot resize image /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
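The "Cannot resize image ... to a smaller size" debug line is not an error: nova only ever grows a disk, and here the flavor's 1 GiB root disk equals the overlay's virtual size, so the resize step is simply skipped. A sketch of the check, assuming the grow-only rule that nova.virt.disk.api.can_resize_image applies:

def can_resize_image(virtual_size: int, requested_size: int) -> bool:
    # Grow-only: equal or smaller requests are skipped. In the log's case
    # both sides are 1073741824 bytes, so this returns False.
    return requested_size > virtual_size

assert can_resize_image(1073741824, 1073741824) is False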
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.004 189273 DEBUG nova.objects.instance [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lazy-loading 'migration_context' on Instance uuid a8349cde-3de3-4359-9fba-8d329cab9476 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.022 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "/var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.023 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.024 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.045 189273 DEBUG oslo_concurrency.processutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.103 189273 DEBUG oslo_concurrency.processutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.104 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.105 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.120 189273 DEBUG oslo_concurrency.processutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.180 189273 DEBUG oslo_concurrency.processutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.181 189273 DEBUG oslo_concurrency.processutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.220 189273 DEBUG oslo_concurrency.processutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 1073741824" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.221 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.222 189273 DEBUG oslo_concurrency.processutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.293 189273 DEBUG oslo_concurrency.processutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.294 189273 DEBUG nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.295 189273 DEBUG nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Ensure instance console log exists: /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.296 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.296 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:25:54 compute-0 nova_compute[189268]: 2025-11-22 08:25:54.296 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:25:55 compute-0 nova_compute[189268]: 2025-11-22 08:25:55.582 189273 DEBUG nova.network.neutron [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Successfully updated port: c99bd243-1114-4104-8d75-dd481789f958 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 08:25:55 compute-0 nova_compute[189268]: 2025-11-22 08:25:55.601 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:25:55 compute-0 nova_compute[189268]: 2025-11-22 08:25:55.601 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquired lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:25:55 compute-0 nova_compute[189268]: 2025-11-22 08:25:55.601 189273 DEBUG nova.network.neutron [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 08:25:55 compute-0 nova_compute[189268]: 2025-11-22 08:25:55.686 189273 DEBUG nova.compute.manager [req-c85503ca-2566-4f1b-bf7e-f9207b4fc522 req-860d7d62-414c-4a21-b905-be8b858ff262 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Received event network-changed-c99bd243-1114-4104-8d75-dd481789f958 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:25:55 compute-0 nova_compute[189268]: 2025-11-22 08:25:55.687 189273 DEBUG nova.compute.manager [req-c85503ca-2566-4f1b-bf7e-f9207b4fc522 req-860d7d62-414c-4a21-b905-be8b858ff262 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Refreshing instance network info cache due to event network-changed-c99bd243-1114-4104-8d75-dd481789f958. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:25:55 compute-0 nova_compute[189268]: 2025-11-22 08:25:55.687 189273 DEBUG oslo_concurrency.lockutils [req-c85503ca-2566-4f1b-bf7e-f9207b4fc522 req-860d7d62-414c-4a21-b905-be8b858ff262 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:25:56 compute-0 nova_compute[189268]: 2025-11-22 08:25:56.537 189273 DEBUG nova.network.neutron [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 08:25:56 compute-0 nova_compute[189268]: 2025-11-22 08:25:56.558 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:56 compute-0 nova_compute[189268]: 2025-11-22 08:25:56.981 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:57 compute-0 podman[240218]: 2025-11-22 08:25:57.12701057 +0000 UTC m=+0.066756224 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 08:25:57 compute-0 podman[240216]: 2025-11-22 08:25:57.138335346 +0000 UTC m=+0.091178685 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 08:25:57 compute-0 podman[240217]: 2025-11-22 08:25:57.146786525 +0000 UTC m=+0.094496615 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.751 189273 DEBUG nova.network.neutron [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Updating instance_info_cache with network_info: [{"id": "c99bd243-1114-4104-8d75-dd481789f958", "address": "fa:16:3e:2a:fd:a4", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc99bd243-11", "ovs_interfaceid": "c99bd243-1114-4104-8d75-dd481789f958", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.780 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Releasing lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.781 189273 DEBUG nova.compute.manager [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Instance network_info: |[{"id": "c99bd243-1114-4104-8d75-dd481789f958", "address": "fa:16:3e:2a:fd:a4", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc99bd243-11", "ovs_interfaceid": "c99bd243-1114-4104-8d75-dd481789f958", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.781 189273 DEBUG oslo_concurrency.lockutils [req-c85503ca-2566-4f1b-bf7e-f9207b4fc522 req-860d7d62-414c-4a21-b905-be8b858ff262 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.782 189273 DEBUG nova.network.neutron [req-c85503ca-2566-4f1b-bf7e-f9207b4fc522 req-860d7d62-414c-4a21-b905-be8b858ff262 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Refreshing network info cache for port c99bd243-1114-4104-8d75-dd481789f958 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.786 189273 DEBUG nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Start _get_guest_xml network_info=[{"id": "c99bd243-1114-4104-8d75-dd481789f958", "address": "fa:16:3e:2a:fd:a4", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc99bd243-11", "ovs_interfaceid": "c99bd243-1114-4104-8d75-dd481789f958", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-22T08:23:24Z,direct_url=<?>,disk_format='qcow2',id=de9f57cf-28b4-4cbd-b943-19aa098356bf,min_disk=0,min_ram=0,name='cirros',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-22T08:23:25Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}], 'ephemerals': [{'device_name': '/dev/vdb', 'device_type': 'disk', 'size': 1, 'encryption_options': None, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.795 189273 WARNING nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.807 189273 DEBUG nova.virt.libvirt.host [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.808 189273 DEBUG nova.virt.libvirt.host [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.813 189273 DEBUG nova.virt.libvirt.host [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.813 189273 DEBUG nova.virt.libvirt.host [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.814 189273 DEBUG nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.814 189273 DEBUG nova.virt.hardware [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T08:23:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='796e25a8-f28d-499e-b2fb-dfae32f0eed7',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-22T08:23:24Z,direct_url=<?>,disk_format='qcow2',id=de9f57cf-28b4-4cbd-b943-19aa098356bf,min_disk=0,min_ram=0,name='cirros',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-22T08:23:25Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.815 189273 DEBUG nova.virt.hardware [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.815 189273 DEBUG nova.virt.hardware [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.815 189273 DEBUG nova.virt.hardware [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.816 189273 DEBUG nova.virt.hardware [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.816 189273 DEBUG nova.virt.hardware [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.816 189273 DEBUG nova.virt.hardware [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.817 189273 DEBUG nova.virt.hardware [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.817 189273 DEBUG nova.virt.hardware [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.817 189273 DEBUG nova.virt.hardware [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.818 189273 DEBUG nova.virt.hardware [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.822 189273 DEBUG nova.virt.libvirt.vif [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:25:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-qv6tptr-whvy4btuikeu-vmbwmtq4hym4-vnf-rixlnkr2j72q',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-qv6tptr-whvy4btuikeu-vmbwmtq4hym4-vnf-rixlnkr2j72q',id=2,image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='209b9e59-811e-4c2b-a756-c29ba92c4b5c'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='80e46844b3824928a6138235e5ede512',ramdisk_id='',reservation_id='r-oztih3eu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:25:53Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wODAxMDQ3NTY5NTgxMTA3ODc2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA4MDEwNDc1Njk1ODExMDc4NzY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDgwMTA0NzU2OTU4MTEwNzg3Nj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA4MDEwNDc1Njk1ODExMDc4NzY9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wODAxMDQ3NTY5NTgxMTA3ODc2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wODAxMDQ3NTY5NTgxMTA3ODc2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDgwMTA0NzU2OTU4MTEwNzg3Nj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA4MDEwNDc1Njk1ODExMDc4NzY9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wODAxMDQ3NTY5NTgxMTA3ODc2PT0tLQo=',user_id='27ed1dd009ad4e29863ab5e3a9826c94',uuid=a8349cde-3de3-4359-9fba-8d329cab9476,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c99bd243-1114-4104-8d75-dd481789f958", "address": "fa:16:3e:2a:fd:a4", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc99bd243-11", "ovs_interfaceid": "c99bd243-1114-4104-8d75-dd481789f958", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.823 189273 DEBUG nova.network.os_vif_util [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converting VIF {"id": "c99bd243-1114-4104-8d75-dd481789f958", "address": "fa:16:3e:2a:fd:a4", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc99bd243-11", "ovs_interfaceid": "c99bd243-1114-4104-8d75-dd481789f958", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.824 189273 DEBUG nova.network.os_vif_util [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:fd:a4,bridge_name='br-int',has_traffic_filtering=True,id=c99bd243-1114-4104-8d75-dd481789f958,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc99bd243-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.825 189273 DEBUG nova.objects.instance [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lazy-loading 'pci_devices' on Instance uuid a8349cde-3de3-4359-9fba-8d329cab9476 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.836 189273 DEBUG nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] End _get_guest_xml xml=<domain type="kvm">
Nov 22 08:25:58 compute-0 nova_compute[189268]:   <uuid>a8349cde-3de3-4359-9fba-8d329cab9476</uuid>
Nov 22 08:25:58 compute-0 nova_compute[189268]:   <name>instance-00000002</name>
Nov 22 08:25:58 compute-0 nova_compute[189268]:   <memory>524288</memory>
Nov 22 08:25:58 compute-0 nova_compute[189268]:   <vcpu>1</vcpu>
Nov 22 08:25:58 compute-0 nova_compute[189268]:   <metadata>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <nova:name>vn-qv6tptr-whvy4btuikeu-vmbwmtq4hym4-vnf-rixlnkr2j72q</nova:name>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <nova:creationTime>2025-11-22 08:25:58</nova:creationTime>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <nova:flavor name="m1.small">
Nov 22 08:25:58 compute-0 nova_compute[189268]:         <nova:memory>512</nova:memory>
Nov 22 08:25:58 compute-0 nova_compute[189268]:         <nova:disk>1</nova:disk>
Nov 22 08:25:58 compute-0 nova_compute[189268]:         <nova:swap>0</nova:swap>
Nov 22 08:25:58 compute-0 nova_compute[189268]:         <nova:ephemeral>1</nova:ephemeral>
Nov 22 08:25:58 compute-0 nova_compute[189268]:         <nova:vcpus>1</nova:vcpus>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       </nova:flavor>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <nova:owner>
Nov 22 08:25:58 compute-0 nova_compute[189268]:         <nova:user uuid="27ed1dd009ad4e29863ab5e3a9826c94">admin</nova:user>
Nov 22 08:25:58 compute-0 nova_compute[189268]:         <nova:project uuid="80e46844b3824928a6138235e5ede512">admin</nova:project>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       </nova:owner>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <nova:root type="image" uuid="de9f57cf-28b4-4cbd-b943-19aa098356bf"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <nova:ports>
Nov 22 08:25:58 compute-0 nova_compute[189268]:         <nova:port uuid="c99bd243-1114-4104-8d75-dd481789f958">
Nov 22 08:25:58 compute-0 nova_compute[189268]:           <nova:ip type="fixed" address="192.168.0.99" ipVersion="4"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:         </nova:port>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       </nova:ports>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     </nova:instance>
Nov 22 08:25:58 compute-0 nova_compute[189268]:   </metadata>
Nov 22 08:25:58 compute-0 nova_compute[189268]:   <sysinfo type="smbios">
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <system>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <entry name="manufacturer">RDO</entry>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <entry name="product">OpenStack Compute</entry>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <entry name="serial">a8349cde-3de3-4359-9fba-8d329cab9476</entry>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <entry name="uuid">a8349cde-3de3-4359-9fba-8d329cab9476</entry>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <entry name="family">Virtual Machine</entry>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     </system>
Nov 22 08:25:58 compute-0 nova_compute[189268]:   </sysinfo>
Nov 22 08:25:58 compute-0 nova_compute[189268]:   <os>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <boot dev="hd"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <smbios mode="sysinfo"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:   </os>
Nov 22 08:25:58 compute-0 nova_compute[189268]:   <features>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <acpi/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <apic/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <vmcoreinfo/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:   </features>
Nov 22 08:25:58 compute-0 nova_compute[189268]:   <clock offset="utc">
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <timer name="hpet" present="no"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:   </clock>
Nov 22 08:25:58 compute-0 nova_compute[189268]:   <cpu mode="host-model" match="exact">
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:25:58 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <target dev="vda" bus="virtio"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <target dev="vdb" bus="virtio"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <disk type="file" device="cdrom">
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <driver name="qemu" type="raw" cache="none"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.config"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <target dev="sda" bus="sata"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <interface type="ethernet">
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <mac address="fa:16:3e:2a:fd:a4"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <mtu size="1442"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <target dev="tapc99bd243-11"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     </interface>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <serial type="pty">
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <log file="/var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/console.log" append="off"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     </serial>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <video>
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     </video>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <input type="tablet" bus="usb"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <rng model="virtio">
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <backend model="random">/dev/urandom</backend>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <controller type="usb" index="0"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     <memballoon model="virtio">
Nov 22 08:25:58 compute-0 nova_compute[189268]:       <stats period="10"/>
Nov 22 08:25:58 compute-0 nova_compute[189268]:     </memballoon>
Nov 22 08:25:58 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:25:58 compute-0 nova_compute[189268]: </domain>
Nov 22 08:25:58 compute-0 nova_compute[189268]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.837 189273 DEBUG nova.compute.manager [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Preparing to wait for external event network-vif-plugged-c99bd243-1114-4104-8d75-dd481789f958 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.837 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "a8349cde-3de3-4359-9fba-8d329cab9476-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.838 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "a8349cde-3de3-4359-9fba-8d329cab9476-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.838 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "a8349cde-3de3-4359-9fba-8d329cab9476-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.839 189273 DEBUG nova.virt.libvirt.vif [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:25:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-qv6tptr-whvy4btuikeu-vmbwmtq4hym4-vnf-rixlnkr2j72q',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-qv6tptr-whvy4btuikeu-vmbwmtq4hym4-vnf-rixlnkr2j72q',id=2,image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='209b9e59-811e-4c2b-a756-c29ba92c4b5c'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='80e46844b3824928a6138235e5ede512',ramdisk_id='',reservation_id='r-oztih3eu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:25:53Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wODAxMDQ3NTY5NTgxMTA3ODc2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA4MDEwNDc1Njk1ODExMDc4NzY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDgwMTA0NzU2OTU4MTEwNzg3Nj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA4MDEwNDc1Njk1ODExMDc4NzY9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wODAxMDQ3NTY5NTgxMTA3ODc2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wODAxMDQ3NTY5NTgxMTA3ODc2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDgwMTA0NzU2OTU4MTEwNzg3Nj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA4MDEwNDc1Njk1ODExMDc4NzY9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wODAxMDQ3NTY5NTgxMTA3ODc2PT0tLQo=',user_id='27ed1dd009ad4e29863ab5e3a9826c94',uuid=a8349cde-3de3-4359-9fba-8d329cab9476,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c99bd243-1114-4104-8d75-dd481789f958", "address": "fa:16:3e:2a:fd:a4", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc99bd243-11", "ovs_interfaceid": "c99bd243-1114-4104-8d75-dd481789f958", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.839 189273 DEBUG nova.network.os_vif_util [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converting VIF {"id": "c99bd243-1114-4104-8d75-dd481789f958", "address": "fa:16:3e:2a:fd:a4", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc99bd243-11", "ovs_interfaceid": "c99bd243-1114-4104-8d75-dd481789f958", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.840 189273 DEBUG nova.network.os_vif_util [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:fd:a4,bridge_name='br-int',has_traffic_filtering=True,id=c99bd243-1114-4104-8d75-dd481789f958,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc99bd243-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.840 189273 DEBUG os_vif [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:fd:a4,bridge_name='br-int',has_traffic_filtering=True,id=c99bd243-1114-4104-8d75-dd481789f958,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc99bd243-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.840 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.841 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.841 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.844 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.844 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc99bd243-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.844 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc99bd243-11, col_values=(('external_ids', {'iface-id': 'c99bd243-1114-4104-8d75-dd481789f958', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2a:fd:a4', 'vm-uuid': 'a8349cde-3de3-4359-9fba-8d329cab9476'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
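[Editor's note] The three ovsdbapp commands above (AddBridgeCommand, AddPortCommand, DbSetCommand) are the whole of os-vif's plug transaction: ensure br-int exists, attach the tap port, and tag the Interface row so ovn-controller can match it to the logical port. The "Transaction caused no change" reply simply means br-int already existed. A minimal standalone sketch of the same transaction, assuming ovsdbapp's usual Open_vSwitch schema API and the default local ovsdb-server socket (only the names and external_ids values are taken from the log):

    # Sketch: replay the logged transaction against a local ovsdb-server.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        # AddPortCommand(bridge=br-int, port=tapc99bd243-11, may_exist=True)
        txn.add(api.add_port('br-int', 'tapc99bd243-11', may_exist=True))
        # DbSetCommand: these external_ids are what lets OVN bind the
        # tap device to the logical port and VM below.
        txn.add(api.db_set(
            'Interface', 'tapc99bd243-11',
            ('external_ids', {
                'iface-id': 'c99bd243-1114-4104-8d75-dd481789f958',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:2a:fd:a4',
                'vm-uuid': 'a8349cde-3de3-4359-9fba-8d329cab9476'})))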
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.846 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:58 compute-0 NetworkManager[56326]: <info>  [1763799958.8473] manager: (tapc99bd243-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.850 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.855 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.855 189273 INFO os_vif [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:fd:a4,bridge_name='br-int',has_traffic_filtering=True,id=c99bd243-1114-4104-8d75-dd481789f958,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc99bd243-11')
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.899 189273 DEBUG nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.900 189273 DEBUG nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.900 189273 DEBUG nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.900 189273 DEBUG nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No VIF found with MAC fa:16:3e:2a:fd:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 08:25:58 compute-0 nova_compute[189268]: 2025-11-22 08:25:58.901 189273 INFO nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Using config drive
Nov 22 08:25:59 compute-0 rsyslogd[236668]: message too long (8192) with configured size 8096, begin of message is: 2025-11-22 08:25:58.822 189273 DEBUG nova.virt.libvirt.vif [None req-a8096984-a0 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 22 08:25:59 compute-0 rsyslogd[236668]: message too long (8192) with configured size 8096, begin of message is: 2025-11-22 08:25:58.839 189273 DEBUG nova.virt.libvirt.vif [None req-a8096984-a0 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
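[Editor's note] These two rsyslog complaints account for the mangled records earlier in this capture: the nova_compute DEBUG lines that embed the full instance spec and base64 user-data exceed rsyslog's 8096-byte default, so they arrive split, and the continuation records carry no syslog header. If complete lines are needed, the limit can be raised in /etc/rsyslog.conf, before any modules are loaded, with the legacy directive $MaxMessageSize 64k (or global(maxMessageSize="64k") in RainerScript); the e/2445 URL in the message documents the same fix.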
Nov 22 08:25:59 compute-0 nova_compute[189268]: 2025-11-22 08:25:59.225 189273 INFO nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Creating config drive at /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.config
Nov 22 08:25:59 compute-0 nova_compute[189268]: 2025-11-22 08:25:59.230 189273 DEBUG oslo_concurrency.processutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptilu8psh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:25:59 compute-0 nova_compute[189268]: 2025-11-22 08:25:59.360 189273 DEBUG oslo_concurrency.processutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptilu8psh" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
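[Editor's note] The mkisofs run above materialises the "Using config drive" decision from 08:25:58: a small ISO9660 image with volume label config-2, built from the temporary directory /tmp/tmptilu8psh and later attached to the guest. A hypothetical guest-side check, assuming the standard OpenStack config-drive layout (only the label and instance UUID come from this log):

    # Find the config drive by its volume label (the -V config-2 flag above),
    # mount it read-only, and read the instance UUID back out of the metadata.
    import json
    import subprocess

    dev = subprocess.run(['blkid', '-L', 'config-2'],
                         capture_output=True, text=True).stdout.strip()
    subprocess.run(['mount', '-o', 'ro', dev, '/mnt'], check=True)
    with open('/mnt/openstack/latest/meta_data.json') as f:
        print(json.load(f)['uuid'])  # expect a8349cde-3de3-4359-9fba-8d329cab9476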
Nov 22 08:25:59 compute-0 kernel: tapc99bd243-11: entered promiscuous mode
Nov 22 08:25:59 compute-0 NetworkManager[56326]: <info>  [1763799959.4502] manager: (tapc99bd243-11): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Nov 22 08:25:59 compute-0 ovn_controller[97783]: 2025-11-22T08:25:59Z|00035|binding|INFO|Claiming lport c99bd243-1114-4104-8d75-dd481789f958 for this chassis.
Nov 22 08:25:59 compute-0 ovn_controller[97783]: 2025-11-22T08:25:59Z|00036|binding|INFO|c99bd243-1114-4104-8d75-dd481789f958: Claiming fa:16:3e:2a:fd:a4 192.168.0.99
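[Editor's note] ovn-controller claims the logical port here because its Port_Binding carries requested-chassis = compute-0.ctlplane.example.com (visible in the metadata agent's event dump below), which names this host. If the binding needs checking by hand, something like ovn-sbctl find Port_Binding logical_port=c99bd243-1114-4104-8d75-dd481789f958 against the southbound DB should show the same chassis assignment.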
Nov 22 08:25:59 compute-0 nova_compute[189268]: 2025-11-22 08:25:59.455 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:25:59.463 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:fd:a4 192.168.0.99'], port_security=['fa:16:3e:2a:fd:a4 192.168.0.99'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-eigzbqv6tptr-whvy4btuikeu-vmbwmtq4hym4-port-ql5olvunn5or', 'neutron:cidrs': '192.168.0.99/24', 'neutron:device_id': 'a8349cde-3de3-4359-9fba-8d329cab9476', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-eigzbqv6tptr-whvy4btuikeu-vmbwmtq4hym4-port-ql5olvunn5or', 'neutron:project_id': '80e46844b3824928a6138235e5ede512', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d35d3a2-03b3-4b0d-a4c4-f066616bbaa8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a46a1c4a-0f65-4313-a2a5-5e5bba4e3fd3, chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=c99bd243-1114-4104-8d75-dd481789f958) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:25:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:25:59.465 106642 INFO neutron.agent.ovn.metadata.agent [-] Port c99bd243-1114-4104-8d75-dd481789f958 in datapath 02517cc7-8060-4764-b9b0-b1d7f59e3ae8 bound to our chassis
Nov 22 08:25:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:25:59.466 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02517cc7-8060-4764-b9b0-b1d7f59e3ae8
Nov 22 08:25:59 compute-0 nova_compute[189268]: 2025-11-22 08:25:59.470 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:59 compute-0 ovn_controller[97783]: 2025-11-22T08:25:59Z|00037|binding|INFO|Setting lport c99bd243-1114-4104-8d75-dd481789f958 ovn-installed in OVS
Nov 22 08:25:59 compute-0 ovn_controller[97783]: 2025-11-22T08:25:59Z|00038|binding|INFO|Setting lport c99bd243-1114-4104-8d75-dd481789f958 up in Southbound
Nov 22 08:25:59 compute-0 nova_compute[189268]: 2025-11-22 08:25:59.475 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:25:59.484 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[648f2b72-da34-459d-84ff-579ebae1782c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:25:59 compute-0 systemd-udevd[240295]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:25:59 compute-0 systemd-machined[155703]: New machine qemu-2-instance-00000002.
Nov 22 08:25:59 compute-0 NetworkManager[56326]: <info>  [1763799959.5081] device (tapc99bd243-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 08:25:59 compute-0 NetworkManager[56326]: <info>  [1763799959.5126] device (tapc99bd243-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 08:25:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:25:59.513 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[2a360d90-3776-48fd-96d8-bff5ef33d565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:25:59 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Nov 22 08:25:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:25:59.516 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[116d9de5-652f-4dc2-ab32-da7484f86631]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:25:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:25:59.544 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[0add2f9e-2cae-4163-ad75-770067297909]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:25:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:25:59.566 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[644536ee-cb8c-4987-950b-bb9f578b73a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02517cc7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:86:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501085, 'reachable_time': 38920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240303, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:25:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:25:59.582 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[207a28cd-7060-453b-a070-8d62f491cccc]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap02517cc7-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501097, 'tstamp': 501097}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240309, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap02517cc7-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501100, 'tstamp': 501100}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240309, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
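[Editor's note] The two privsep replies above are pyroute2 netlink dumps taken inside the metadata namespace ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8 (the 'target' field): the namespace's tap02517cc7-81 leg carries the subnet address 192.168.0.2/24 plus the well-known metadata address 169.254.169.254/32. A minimal sketch of the same query, assuming pyroute2 is installed on the host (names and expected output come straight from the log):

    # List the addresses inside the OVN metadata namespace, as the agent's
    # privsep helper does via netlink.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8') as ns:
        for msg in ns.get_addr():
            attrs = dict(msg['attrs'])
            print(attrs.get('IFA_LABEL'), attrs['IFA_ADDRESS'])
    # -> tap02517cc7-81 192.168.0.2
    # -> tap02517cc7-81 169.254.169.254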
Nov 22 08:25:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:25:59.584 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02517cc7-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:25:59 compute-0 nova_compute[189268]: 2025-11-22 08:25:59.586 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:59 compute-0 nova_compute[189268]: 2025-11-22 08:25:59.587 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:25:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:25:59.587 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02517cc7-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:25:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:25:59.588 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:25:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:25:59.588 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02517cc7-80, col_values=(('external_ids', {'iface-id': '5e2a8859-83a6-4000-bcad-5571f3c7bd5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:25:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:25:59.588 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:25:59 compute-0 podman[203476]: time="2025-11-22T08:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:25:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:25:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4791 "" "Go-http-client/1.1"
Nov 22 08:25:59 compute-0 nova_compute[189268]: 2025-11-22 08:25:59.872 189273 DEBUG nova.network.neutron [req-c85503ca-2566-4f1b-bf7e-f9207b4fc522 req-860d7d62-414c-4a21-b905-be8b858ff262 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Updated VIF entry in instance network info cache for port c99bd243-1114-4104-8d75-dd481789f958. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:25:59 compute-0 nova_compute[189268]: 2025-11-22 08:25:59.873 189273 DEBUG nova.network.neutron [req-c85503ca-2566-4f1b-bf7e-f9207b4fc522 req-860d7d62-414c-4a21-b905-be8b858ff262 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Updating instance_info_cache with network_info: [{"id": "c99bd243-1114-4104-8d75-dd481789f958", "address": "fa:16:3e:2a:fd:a4", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc99bd243-11", "ovs_interfaceid": "c99bd243-1114-4104-8d75-dd481789f958", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:25:59 compute-0 nova_compute[189268]: 2025-11-22 08:25:59.887 189273 DEBUG oslo_concurrency.lockutils [req-c85503ca-2566-4f1b-bf7e-f9207b4fc522 req-860d7d62-414c-4a21-b905-be8b858ff262 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.027 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763799960.0268748, a8349cde-3de3-4359-9fba-8d329cab9476 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.027 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] VM Started (Lifecycle Event)
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.034 189273 DEBUG nova.compute.manager [req-ef9e836e-256e-4450-a0f9-ddbc20b6aeb4 req-a83596fb-0145-4e71-a525-1f10789e4cf2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Received event network-vif-plugged-c99bd243-1114-4104-8d75-dd481789f958 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.034 189273 DEBUG oslo_concurrency.lockutils [req-ef9e836e-256e-4450-a0f9-ddbc20b6aeb4 req-a83596fb-0145-4e71-a525-1f10789e4cf2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "a8349cde-3de3-4359-9fba-8d329cab9476-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.034 189273 DEBUG oslo_concurrency.lockutils [req-ef9e836e-256e-4450-a0f9-ddbc20b6aeb4 req-a83596fb-0145-4e71-a525-1f10789e4cf2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "a8349cde-3de3-4359-9fba-8d329cab9476-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.035 189273 DEBUG oslo_concurrency.lockutils [req-ef9e836e-256e-4450-a0f9-ddbc20b6aeb4 req-a83596fb-0145-4e71-a525-1f10789e4cf2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "a8349cde-3de3-4359-9fba-8d329cab9476-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.035 189273 DEBUG nova.compute.manager [req-ef9e836e-256e-4450-a0f9-ddbc20b6aeb4 req-a83596fb-0145-4e71-a525-1f10789e4cf2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Processing event network-vif-plugged-c99bd243-1114-4104-8d75-dd481789f958 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.036 189273 DEBUG nova.compute.manager [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.042 189273 DEBUG nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.045 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.051 189273 INFO nova.virt.libvirt.driver [-] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Instance spawned successfully.
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.052 189273 DEBUG nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.054 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
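[Editor's note] The sync message above compares two encodings of the same fact: the database still records power_state 0 because the build has not been finalised, while libvirt already reports 1 for the freshly started guest. For reference, nova's power-state codes (from nova/compute/power_state.py):

    # Nova power-state codes referenced by the sync_power_state messages.
    NOSTATE = 0x00    # DB value while the instance is still building
    RUNNING = 0x01    # what libvirt reports once the domain starts
    PAUSED = 0x03
    SHUTDOWN = 0x04
    CRASHED = 0x06
    SUSPENDED = 0x07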
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.079 189273 DEBUG nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.080 189273 DEBUG nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.081 189273 DEBUG nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.081 189273 DEBUG nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.082 189273 DEBUG nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.082 189273 DEBUG nova.virt.libvirt.driver [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:26:00 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:26:00.094 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.119 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.119 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763799960.0270143, a8349cde-3de3-4359-9fba-8d329cab9476 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.120 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] VM Paused (Lifecycle Event)
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.143 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.150 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763799960.0393915, a8349cde-3de3-4359-9fba-8d329cab9476 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.150 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] VM Resumed (Lifecycle Event)
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.154 189273 INFO nova.compute.manager [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Took 6.47 seconds to spawn the instance on the hypervisor.
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.155 189273 DEBUG nova.compute.manager [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.165 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.169 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.189 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.216 189273 INFO nova.compute.manager [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Took 6.90 seconds to build instance.
Nov 22 08:26:00 compute-0 nova_compute[189268]: 2025-11-22 08:26:00.232 189273 DEBUG oslo_concurrency.lockutils [None req-a8096984-a0d1-4ff1-9b66-2855a501e08f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "a8349cde-3de3-4359-9fba-8d329cab9476" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:26:01 compute-0 anacron[50953]: Job `cron.weekly' started
Nov 22 08:26:01 compute-0 anacron[50953]: Job `cron.weekly' terminated
Nov 22 08:26:01 compute-0 openstack_network_exporter[205661]: ERROR   08:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:26:01 compute-0 openstack_network_exporter[205661]: ERROR   08:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:26:01 compute-0 openstack_network_exporter[205661]: ERROR   08:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:26:01 compute-0 openstack_network_exporter[205661]: ERROR   08:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:26:01 compute-0 openstack_network_exporter[205661]: ERROR   08:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:26:01 compute-0 nova_compute[189268]: 2025-11-22 08:26:01.562 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:02 compute-0 nova_compute[189268]: 2025-11-22 08:26:02.103 189273 DEBUG nova.compute.manager [req-ed82f4f2-05f5-4962-886e-cefa37aab9f3 req-89a38d6e-bb77-459a-9ff4-7d3554fed2c0 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Received event network-vif-plugged-c99bd243-1114-4104-8d75-dd481789f958 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:26:02 compute-0 nova_compute[189268]: 2025-11-22 08:26:02.104 189273 DEBUG oslo_concurrency.lockutils [req-ed82f4f2-05f5-4962-886e-cefa37aab9f3 req-89a38d6e-bb77-459a-9ff4-7d3554fed2c0 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "a8349cde-3de3-4359-9fba-8d329cab9476-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:26:02 compute-0 nova_compute[189268]: 2025-11-22 08:26:02.104 189273 DEBUG oslo_concurrency.lockutils [req-ed82f4f2-05f5-4962-886e-cefa37aab9f3 req-89a38d6e-bb77-459a-9ff4-7d3554fed2c0 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "a8349cde-3de3-4359-9fba-8d329cab9476-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:26:02 compute-0 nova_compute[189268]: 2025-11-22 08:26:02.104 189273 DEBUG oslo_concurrency.lockutils [req-ed82f4f2-05f5-4962-886e-cefa37aab9f3 req-89a38d6e-bb77-459a-9ff4-7d3554fed2c0 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "a8349cde-3de3-4359-9fba-8d329cab9476-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:26:02 compute-0 nova_compute[189268]: 2025-11-22 08:26:02.105 189273 DEBUG nova.compute.manager [req-ed82f4f2-05f5-4962-886e-cefa37aab9f3 req-89a38d6e-bb77-459a-9ff4-7d3554fed2c0 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] No waiting events found dispatching network-vif-plugged-c99bd243-1114-4104-8d75-dd481789f958 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:26:02 compute-0 nova_compute[189268]: 2025-11-22 08:26:02.105 189273 WARNING nova.compute.manager [req-ed82f4f2-05f5-4962-886e-cefa37aab9f3 req-89a38d6e-bb77-459a-9ff4-7d3554fed2c0 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Received unexpected event network-vif-plugged-c99bd243-1114-4104-8d75-dd481789f958 for instance with vm_state active and task_state None.
Nov 22 08:26:03 compute-0 nova_compute[189268]: 2025-11-22 08:26:03.847 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:04 compute-0 podman[240322]: 2025-11-22 08:26:04.119576685 +0000 UTC m=+0.070488755 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a)
Nov 22 08:26:04 compute-0 podman[240323]: 2025-11-22 08:26:04.128345023 +0000 UTC m=+0.076549940 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:26:06 compute-0 nova_compute[189268]: 2025-11-22 08:26:06.564 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:08 compute-0 podman[240359]: 2025-11-22 08:26:08.161419389 +0000 UTC m=+0.111698710 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, release-0.7.12=, vcs-type=git, com.redhat.component=ubi9-container, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, distribution-scope=public, version=9.4, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 22 08:26:08 compute-0 podman[240360]: 2025-11-22 08:26:08.185289604 +0000 UTC m=+0.131318231 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:26:08 compute-0 nova_compute[189268]: 2025-11-22 08:26:08.850 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:26:09.958 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:26:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:26:09.958 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:26:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:26:09.959 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:26:11 compute-0 nova_compute[189268]: 2025-11-22 08:26:11.566 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:13 compute-0 nova_compute[189268]: 2025-11-22 08:26:13.853 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:14 compute-0 podman[240405]: 2025-11-22 08:26:14.146149975 +0000 UTC m=+0.099798729 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=edpm, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Nov 22 08:26:15 compute-0 podman[240425]: 2025-11-22 08:26:15.111616529 +0000 UTC m=+0.061586955 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
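The podman health_status events above embed each container's config_data as a Python-literal dict. A sketch of pulling the healthcheck command out of such an event, assuming the dict has already been cut out of the log line (the literal below is a shortened copy of the node_exporter entry):

    import ast

    # Shortened copy of the node_exporter config_data literal from the log.
    config_data = ("{'image': 'quay.io/prometheus/node-exporter:v1.5.0', "
                   "'healthcheck': {'test': '/openstack/healthcheck node_exporter', "
                   "'mount': '/var/lib/openstack/healthchecks/node_exporter'}}")

    cfg = ast.literal_eval(config_data)  # literals only; safer than eval()
    print(cfg['healthcheck']['test'])    # -> /openstack/healthcheck node_exporter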
Nov 22 08:26:16 compute-0 nova_compute[189268]: 2025-11-22 08:26:16.568 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:18 compute-0 nova_compute[189268]: 2025-11-22 08:26:18.856 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:21 compute-0 nova_compute[189268]: 2025-11-22 08:26:21.570 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.089 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] exceeds the number of worker threads available to execute them; polling may therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.089 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.089 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.090 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
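The "Processing pollsters ... with [1] threads" line and the run of "Registering pollster" lines above reflect each pollster being handed to a shared concurrent.futures.ThreadPoolExecutor. A minimal sketch of that dispatch pattern, using stand-in poll functions rather than real stevedore extensions:

    from concurrent.futures import ThreadPoolExecutor

    def make_pollster(name):
        def poll():
            return f'{name}: polled'
        return poll

    pollsters = [make_pollster(n) for n in
                 ('network.incoming.bytes', 'cpu', 'disk.device.capacity')]

    # One worker thread, as in the log: registered pollsters queue up
    # on the executor and run serially.
    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(p) for p in pollsters]
        for f in futures:
            print(f.result())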
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.095 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 22 08:26:22 compute-0 nova_compute[189268]: 2025-11-22 08:26:22.367 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:26:22 compute-0 nova_compute[189268]: 2025-11-22 08:26:22.368 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:26:22 compute-0 nova_compute[189268]: 2025-11-22 08:26:22.368 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:26:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:22.466 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}41de7311aa3eb0f3adb679afd5ea377bdc27c99a5c84bf2ba532fbbe80a7016c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 22 08:26:23 compute-0 nova_compute[189268]: 2025-11-22 08:26:23.589 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:26:23 compute-0 nova_compute[189268]: 2025-11-22 08:26:23.589 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:26:23 compute-0 nova_compute[189268]: 2025-11-22 08:26:23.589 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:26:23 compute-0 nova_compute[189268]: 2025-11-22 08:26:23.590 189273 DEBUG nova.objects.instance [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:26:23 compute-0 nova_compute[189268]: 2025-11-22 08:26:23.859 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:24.603 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1849 Content-Type: application/json Date: Sat, 22 Nov 2025 08:26:22 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-3f17a21e-7c86-45d3-8d5e-43a0c3f20942 x-openstack-request-id: req-3f17a21e-7c86-45d3-8d5e-43a0c3f20942 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 22 08:26:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:24.603 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66", "name": "test_0", "status": "ACTIVE", "tenant_id": "80e46844b3824928a6138235e5ede512", "user_id": "27ed1dd009ad4e29863ab5e3a9826c94", "metadata": {}, "hostId": "984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4", "image": {"id": "de9f57cf-28b4-4cbd-b943-19aa098356bf", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/de9f57cf-28b4-4cbd-b943-19aa098356bf"}]}, "flavor": {"id": "796e25a8-f28d-499e-b2fb-dfae32f0eed7", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/796e25a8-f28d-499e-b2fb-dfae32f0eed7"}]}, "created": "2025-11-22T08:24:42Z", "updated": "2025-11-22T08:24:54Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.53", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:4a:5d"}, {"version": 4, "addr": "192.168.122.224", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:4a:5d"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-22T08:24:53.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 22 08:26:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:24.604 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 used request id req-3f17a21e-7c86-45d3-8d5e-43a0c3f20942 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
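The REQ/RESP pair above is keystoneauth1's HTTP debug logging for a novaclient GET (the curl line is a log rendering, not a shell command that was run). A sketch of issuing the same request through a keystoneauth1 Session; the auth URL and credentials below are placeholders, not values from this log:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='https://keystone-internal.openstack.svc:5000/v3',
                       username='ceilometer', password='SECRET',  # placeholders
                       project_name='service',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)

    # Same call as the logged request; the session injects X-Auth-Token itself.
    resp = sess.get('https://nova-internal.openstack.svc:8774/v2.1/servers/'
                    '78b5db02-f49a-4c0b-b4f6-8d3b3d689e66',
                    headers={'X-OpenStack-Nova-API-Version': '2.1'})
    print(resp.status_code, resp.json()['server']['name'])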
Nov 22 08:26:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:24.607 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '78b5db02-f49a-4c0b-b4f6-8d3b3d689e66', 'name': 'test_0', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:26:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:24.611 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a8349cde-3de3-4359-9fba-8d329cab9476 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 22 08:26:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:24.613 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a8349cde-3de3-4359-9fba-8d329cab9476 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}41de7311aa3eb0f3adb679afd5ea377bdc27c99a5c84bf2ba532fbbe80a7016c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.110 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Sat, 22 Nov 2025 08:26:24 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-30979d8a-6e74-4983-a750-19603ed52ff5 x-openstack-request-id: req-30979d8a-6e74-4983-a750-19603ed52ff5 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.110 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "a8349cde-3de3-4359-9fba-8d329cab9476", "name": "vn-qv6tptr-whvy4btuikeu-vmbwmtq4hym4-vnf-rixlnkr2j72q", "status": "ACTIVE", "tenant_id": "80e46844b3824928a6138235e5ede512", "user_id": "27ed1dd009ad4e29863ab5e3a9826c94", "metadata": {"metering.server_group": "209b9e59-811e-4c2b-a756-c29ba92c4b5c"}, "hostId": "984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4", "image": {"id": "de9f57cf-28b4-4cbd-b943-19aa098356bf", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/de9f57cf-28b4-4cbd-b943-19aa098356bf"}]}, "flavor": {"id": "796e25a8-f28d-499e-b2fb-dfae32f0eed7", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/796e25a8-f28d-499e-b2fb-dfae32f0eed7"}]}, "created": "2025-11-22T08:25:52Z", "updated": "2025-11-22T08:26:00Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.99", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:2a:fd:a4"}, {"version": 4, "addr": "192.168.122.200", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:2a:fd:a4"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/a8349cde-3de3-4359-9fba-8d329cab9476"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/a8349cde-3de3-4359-9fba-8d329cab9476"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-22T08:26:00.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.110 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a8349cde-3de3-4359-9fba-8d329cab9476 used request id req-30979d8a-6e74-4983-a750-19603ed52ff5 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.111 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a8349cde-3de3-4359-9fba-8d329cab9476', 'name': 'vn-qv6tptr-whvy4btuikeu-vmbwmtq4hym4-vnf-rixlnkr2j72q', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {'metering.server_group': '209b9e59-811e-4c2b-a756-c29ba92c4b5c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.112 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.112 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.112 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.113 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-22T08:26:25.112406) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.118 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 / tap4645bc8c-a8 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.118 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.122 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for a8349cde-3de3-4359-9fba-8d329cab9476 / tapc99bd243-11 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.122 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.123 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
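"No delta meter predecessor" above means this is the first poll of that instance/interface, so there is no earlier cumulative reading to subtract and the raw counter is emitted as the sample. A sketch of that delta logic under that reading; the cache and function names are illustrative, not ceilometer's:

    previous = {}  # (instance_id, device) -> last cumulative reading

    def delta_sample(instance_id, device, cumulative):
        key = (instance_id, device)
        prior = previous.get(key)
        previous[key] = cumulative
        if prior is None:
            # No predecessor: first poll, emit the cumulative value as-is.
            return cumulative
        return cumulative - prior

    print(delta_sample('78b5db02', 'tap4645bc8c-a8', 1968))  # 1968 (first poll)
    print(delta_sample('78b5db02', 'tap4645bc8c-a8', 2468))  # 500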
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.123 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.124 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.124 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.124 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.124 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.124 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.124 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.125 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.125 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-22T08:26:25.124231) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.125 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.125 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.125 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.125 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.125 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.125 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.125 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-22T08:26:25.125589) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.126 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.126 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.126 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.126 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.126 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.126 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.126 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.126 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.127 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.127 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.127 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-22T08:26:25.126700) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.127 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.127 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.127 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.127 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.128 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-22T08:26:25.127886) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.150 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/cpu volume: 36390000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.175 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/cpu volume: 24410000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.176 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.176 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.176 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.176 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.176 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.177 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.177 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.177 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-22T08:26:25.177011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.177 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.178 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.178 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.178 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.178 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.178 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-22T08:26:25.178318) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.202 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.203 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.203 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.223 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.224 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.225 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.225 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
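The two 1073741824-byte capacity samples per instance line up with the flavor reported earlier ('disk': 1, 'ephemeral': 1, in GiB); the third, much smaller device is plausibly the config drive, since the server records show config_drive "True". A one-line check of the arithmetic:

    GiB = 1024 ** 3
    assert GiB == 1073741824  # 1 GiB flavor disk/ephemeral, in bytes, as sampled above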
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.226 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.226 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.226 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.226 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.226 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.227 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-22T08:26:25.226705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.294 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.295 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.296 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.379 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.380 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.380 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.381 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.381 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.381 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.381 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.381 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.382 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.382 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.382 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-22T08:26:25.381902) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.382 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.383 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.383 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.383 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.383 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.384 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.384 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.384 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.384 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.384 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.384 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-22T08:26:25.384559) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.385 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.bytes volume: 2062 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.385 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.385 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.385 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.385 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.385 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.385 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.385 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.386 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-22T08:26:25.385876) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.386 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.386 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>, <NovaLikeServer: vn-qv6tptr-whvy4btuikeu-vmbwmtq4hym4-vnf-rixlnkr2j72q>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>, <NovaLikeServer: vn-qv6tptr-whvy4btuikeu-vmbwmtq4hym4-vnf-rixlnkr2j72q>]
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.388 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.389 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.389 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.389 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.390 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.390 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 1339396359 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.390 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-22T08:26:25.389867) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.391 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 138141875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.391 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 117550863 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.391 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.latency volume: 737106484 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.391 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.391 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.latency volume: 3185877 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.392 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.392 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.392 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.392 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.392 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.393 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.393 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.393 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.393 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-22T08:26:25.393009) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.394 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.394 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.394 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.394 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.394 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.394 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.394 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.394 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-22T08:26:25.394549) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.395 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.395 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.395 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.395 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.396 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.396 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.396 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.396 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.397 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.397 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.397 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.397 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.397 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-22T08:26:25.397161) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.397 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.397 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.398 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.398 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.398 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.398 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.399 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.399 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.399 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.399 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.399 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.399 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.400 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.400 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.400 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-22T08:26:25.399324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.400 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.400 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.401 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.401 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.401 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.401 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.401 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.402 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.402 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.403 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.403 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-22T08:26:25.402099) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.403 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.404 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.404 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.404 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.404 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.405 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.406 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.406 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.406 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.406 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.406 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.407 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.407 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-22T08:26:25.406597) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.408 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.409 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.409 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.409 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.409 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.410 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.410 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 18733649639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.410 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-22T08:26:25.410059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.410 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 19241219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.411 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.411 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.411 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.411 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.412 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.412 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.412 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.412 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.412 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.412 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-22T08:26:25.412768) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.413 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.413 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.413 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.413 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.413 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.414 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-22T08:26:25.414038) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.414 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.414 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.415 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.415 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.415 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.415 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.415 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.415 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.416 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-22T08:26:25.415672) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.416 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.416 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.416 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.417 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.417 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.417 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.417 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-22T08:26:25.417141) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.417 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.417 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.418 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.418 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.418 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.418 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.418 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.418 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.419 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-22T08:26:25.418686) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.419 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.419 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.419 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.419 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.419 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.420 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.420 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.420 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.420 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.420 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-22T08:26:25.420138) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.420 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>, <NovaLikeServer: vn-qv6tptr-whvy4btuikeu-vmbwmtq4hym4-vnf-rixlnkr2j72q>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>, <NovaLikeServer: vn-qv6tptr-whvy4btuikeu-vmbwmtq4hym4-vnf-rixlnkr2j72q>]
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.421 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.421 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.421 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.421 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.421 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.421 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/memory.usage volume: 48.9375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.421 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-22T08:26:25.421332) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.421 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.422 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance a8349cde-3de3-4359-9fba-8d329cab9476: ceilometer.compute.pollsters.NoVolumeException
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.422 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.422 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.422 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.422 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.422 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.422 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.423 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.423 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.423 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.423 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.423 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.423 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.423 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.423 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.423 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.423 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.423 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.423 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.423 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.424 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.424 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.424 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.424 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.424 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.424 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.424 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:26:25.424 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:26:25 compute-0 nova_compute[189268]: 2025-11-22 08:26:25.702 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updating instance_info_cache with network_info: [{"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "address": "fa:16:3e:4f:4a:5d", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.53", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4645bc8c-a8", "ovs_interfaceid": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:26:25 compute-0 nova_compute[189268]: 2025-11-22 08:26:25.715 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:26:25 compute-0 nova_compute[189268]: 2025-11-22 08:26:25.715 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 08:26:25 compute-0 nova_compute[189268]: 2025-11-22 08:26:25.716 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:26:25 compute-0 nova_compute[189268]: 2025-11-22 08:26:25.716 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:26:25 compute-0 nova_compute[189268]: 2025-11-22 08:26:25.717 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:26:25 compute-0 nova_compute[189268]: 2025-11-22 08:26:25.717 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:26:25 compute-0 nova_compute[189268]: 2025-11-22 08:26:25.717 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:26:26 compute-0 nova_compute[189268]: 2025-11-22 08:26:26.572 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:27 compute-0 nova_compute[189268]: 2025-11-22 08:26:27.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:26:27 compute-0 nova_compute[189268]: 2025-11-22 08:26:27.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:26:28 compute-0 nova_compute[189268]: 2025-11-22 08:26:28.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:26:28 compute-0 podman[240452]: 2025-11-22 08:26:28.120786908 +0000 UTC m=+0.074923845 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:26:28 compute-0 podman[240451]: 2025-11-22 08:26:28.125470136 +0000 UTC m=+0.083592031 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 22 08:26:28 compute-0 podman[240453]: 2025-11-22 08:26:28.144201691 +0000 UTC m=+0.096157480 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 08:26:28 compute-0 nova_compute[189268]: 2025-11-22 08:26:28.862 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.121 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.124 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.125 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.125 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.216 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.321 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.322 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.384 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.388 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.450 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.451 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:26:29 compute-0 ovn_controller[97783]: 2025-11-22T08:26:29Z|00039|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.514 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.522 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.586 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.588 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.650 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.654 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.717 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.718 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:26:29 compute-0 podman[203476]: time="2025-11-22T08:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:26:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:26:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4794 "" "Go-http-client/1.1"
Nov 22 08:26:29 compute-0 nova_compute[189268]: 2025-11-22 08:26:29.787 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:26:30 compute-0 nova_compute[189268]: 2025-11-22 08:26:30.119 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:26:30 compute-0 nova_compute[189268]: 2025-11-22 08:26:30.120 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5148MB free_disk=72.50175857543945GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:26:30 compute-0 nova_compute[189268]: 2025-11-22 08:26:30.120 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:26:30 compute-0 nova_compute[189268]: 2025-11-22 08:26:30.121 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:26:30 compute-0 nova_compute[189268]: 2025-11-22 08:26:30.304 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:26:30 compute-0 nova_compute[189268]: 2025-11-22 08:26:30.305 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance a8349cde-3de3-4359-9fba-8d329cab9476 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:26:30 compute-0 nova_compute[189268]: 2025-11-22 08:26:30.305 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:26:30 compute-0 nova_compute[189268]: 2025-11-22 08:26:30.305 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:26:30 compute-0 nova_compute[189268]: 2025-11-22 08:26:30.371 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:26:30 compute-0 nova_compute[189268]: 2025-11-22 08:26:30.387 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:26:30 compute-0 nova_compute[189268]: 2025-11-22 08:26:30.418 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:26:30 compute-0 nova_compute[189268]: 2025-11-22 08:26:30.419 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.298s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:26:31 compute-0 openstack_network_exporter[205661]: ERROR   08:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:26:31 compute-0 openstack_network_exporter[205661]: ERROR   08:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:26:31 compute-0 openstack_network_exporter[205661]: ERROR   08:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:26:31 compute-0 openstack_network_exporter[205661]: ERROR   08:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:26:31 compute-0 openstack_network_exporter[205661]: ERROR   08:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:26:31 compute-0 nova_compute[189268]: 2025-11-22 08:26:31.575 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:33 compute-0 nova_compute[189268]: 2025-11-22 08:26:33.865 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:35 compute-0 podman[240535]: 2025-11-22 08:26:35.123018782 +0000 UTC m=+0.077617067 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 22 08:26:35 compute-0 podman[240536]: 2025-11-22 08:26:35.137736501 +0000 UTC m=+0.076817586 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 08:26:36 compute-0 nova_compute[189268]: 2025-11-22 08:26:36.578 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:36 compute-0 ovn_controller[97783]: 2025-11-22T08:26:36Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2a:fd:a4 192.168.0.99
Nov 22 08:26:36 compute-0 ovn_controller[97783]: 2025-11-22T08:26:36Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:fd:a4 192.168.0.99
Nov 22 08:26:38 compute-0 nova_compute[189268]: 2025-11-22 08:26:38.869 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:39 compute-0 podman[240586]: 2025-11-22 08:26:39.139983605 +0000 UTC m=+0.088444261 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, io.openshift.tags=base rhel9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30)
Nov 22 08:26:39 compute-0 podman[240587]: 2025-11-22 08:26:39.163244886 +0000 UTC m=+0.107911049 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 08:26:41 compute-0 nova_compute[189268]: 2025-11-22 08:26:41.581 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:43 compute-0 nova_compute[189268]: 2025-11-22 08:26:43.872 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:44 compute-0 podman[240632]: 2025-11-22 08:26:44.769899303 +0000 UTC m=+0.097180827 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, architecture=x86_64)
Nov 22 08:26:46 compute-0 podman[240652]: 2025-11-22 08:26:46.102694587 +0000 UTC m=+0.062607379 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 22 08:26:46 compute-0 nova_compute[189268]: 2025-11-22 08:26:46.583 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:48 compute-0 nova_compute[189268]: 2025-11-22 08:26:48.874 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:51 compute-0 nova_compute[189268]: 2025-11-22 08:26:51.588 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:53 compute-0 nova_compute[189268]: 2025-11-22 08:26:53.878 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:56 compute-0 nova_compute[189268]: 2025-11-22 08:26:56.590 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:58 compute-0 nova_compute[189268]: 2025-11-22 08:26:58.880 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:26:59 compute-0 podman[240677]: 2025-11-22 08:26:59.101815196 +0000 UTC m=+0.057561512 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:26:59 compute-0 podman[240678]: 2025-11-22 08:26:59.115626991 +0000 UTC m=+0.066456243 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 08:26:59 compute-0 podman[240676]: 2025-11-22 08:26:59.140178418 +0000 UTC m=+0.099356907 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 08:26:59 compute-0 podman[203476]: time="2025-11-22T08:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:26:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:26:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4792 "" "Go-http-client/1.1"
Nov 22 08:27:01 compute-0 openstack_network_exporter[205661]: ERROR   08:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:27:01 compute-0 openstack_network_exporter[205661]: ERROR   08:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:27:01 compute-0 openstack_network_exporter[205661]: ERROR   08:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:27:01 compute-0 openstack_network_exporter[205661]: ERROR   08:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:27:01 compute-0 openstack_network_exporter[205661]: ERROR   08:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:27:01 compute-0 nova_compute[189268]: 2025-11-22 08:27:01.592 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:03 compute-0 nova_compute[189268]: 2025-11-22 08:27:03.883 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:06 compute-0 podman[240732]: 2025-11-22 08:27:06.119789217 +0000 UTC m=+0.074710809 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 08:27:06 compute-0 podman[240733]: 2025-11-22 08:27:06.120592468 +0000 UTC m=+0.072466217 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Nov 22 08:27:06 compute-0 nova_compute[189268]: 2025-11-22 08:27:06.593 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:08 compute-0 nova_compute[189268]: 2025-11-22 08:27:08.887 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:27:09.960 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:27:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:27:09.961 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:27:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:27:09.962 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:27:10 compute-0 podman[240777]: 2025-11-22 08:27:10.120987333 +0000 UTC m=+0.074844821 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, build-date=2024-09-18T21:23:30, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.buildah.version=1.29.0, container_name=kepler, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 22 08:27:10 compute-0 podman[240778]: 2025-11-22 08:27:10.160387533 +0000 UTC m=+0.111587189 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 22 08:27:11 compute-0 nova_compute[189268]: 2025-11-22 08:27:11.597 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:13 compute-0 nova_compute[189268]: 2025-11-22 08:27:13.890 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:15 compute-0 podman[240823]: 2025-11-22 08:27:15.130031207 +0000 UTC m=+0.084513294 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, distribution-scope=public, release=1755695350, container_name=openstack_network_exporter, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, maintainer=Red Hat, Inc.)
Nov 22 08:27:16 compute-0 nova_compute[189268]: 2025-11-22 08:27:16.598 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:17 compute-0 podman[240843]: 2025-11-22 08:27:17.117016141 +0000 UTC m=+0.069532138 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 22 08:27:18 compute-0 nova_compute[189268]: 2025-11-22 08:27:18.892 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:21 compute-0 nova_compute[189268]: 2025-11-22 08:27:21.602 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:22 compute-0 nova_compute[189268]: 2025-11-22 08:27:22.419 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:27:22 compute-0 nova_compute[189268]: 2025-11-22 08:27:22.420 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:27:23 compute-0 nova_compute[189268]: 2025-11-22 08:27:23.641 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:27:23 compute-0 nova_compute[189268]: 2025-11-22 08:27:23.641 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:27:23 compute-0 nova_compute[189268]: 2025-11-22 08:27:23.641 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:27:23 compute-0 nova_compute[189268]: 2025-11-22 08:27:23.894 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:26 compute-0 nova_compute[189268]: 2025-11-22 08:27:26.604 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:26 compute-0 nova_compute[189268]: 2025-11-22 08:27:26.643 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Updating instance_info_cache with network_info: [{"id": "c99bd243-1114-4104-8d75-dd481789f958", "address": "fa:16:3e:2a:fd:a4", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc99bd243-11", "ovs_interfaceid": "c99bd243-1114-4104-8d75-dd481789f958", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:27:26 compute-0 nova_compute[189268]: 2025-11-22 08:27:26.661 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:27:26 compute-0 nova_compute[189268]: 2025-11-22 08:27:26.662 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 08:27:26 compute-0 nova_compute[189268]: 2025-11-22 08:27:26.662 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:27:26 compute-0 nova_compute[189268]: 2025-11-22 08:27:26.663 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:27:26 compute-0 nova_compute[189268]: 2025-11-22 08:27:26.663 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:27:26 compute-0 nova_compute[189268]: 2025-11-22 08:27:26.663 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:27:26 compute-0 nova_compute[189268]: 2025-11-22 08:27:26.664 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:27:27 compute-0 nova_compute[189268]: 2025-11-22 08:27:27.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:27:27 compute-0 nova_compute[189268]: 2025-11-22 08:27:27.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:27:27 compute-0 nova_compute[189268]: 2025-11-22 08:27:27.116 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:27:28 compute-0 nova_compute[189268]: 2025-11-22 08:27:28.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:27:28 compute-0 nova_compute[189268]: 2025-11-22 08:27:28.898 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:29 compute-0 podman[203476]: time="2025-11-22T08:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:27:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:27:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4796 "" "Go-http-client/1.1"
Nov 22 08:27:30 compute-0 podman[240866]: 2025-11-22 08:27:30.125053878 +0000 UTC m=+0.069948509 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:27:30 compute-0 podman[240865]: 2025-11-22 08:27:30.139339365 +0000 UTC m=+0.085170982 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 08:27:30 compute-0 podman[240867]: 2025-11-22 08:27:30.153734065 +0000 UTC m=+0.096008225 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.121 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.121 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.122 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.122 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.201 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.302 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.303 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.368 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.370 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:27:31 compute-0 openstack_network_exporter[205661]: ERROR   08:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:27:31 compute-0 openstack_network_exporter[205661]: ERROR   08:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:27:31 compute-0 openstack_network_exporter[205661]: ERROR   08:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:27:31 compute-0 openstack_network_exporter[205661]: ERROR   08:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:27:31 compute-0 openstack_network_exporter[205661]: ERROR   08:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.438 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.441 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.506 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.513 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.579 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.581 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.606 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.648 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.649 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.720 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.721 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:27:31 compute-0 nova_compute[189268]: 2025-11-22 08:27:31.797 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:27:32 compute-0 nova_compute[189268]: 2025-11-22 08:27:32.112 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:27:32 compute-0 nova_compute[189268]: 2025-11-22 08:27:32.113 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5069MB free_disk=72.48494720458984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:27:32 compute-0 nova_compute[189268]: 2025-11-22 08:27:32.114 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:27:32 compute-0 nova_compute[189268]: 2025-11-22 08:27:32.114 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:27:32 compute-0 sshd-session[240921]: Invalid user oracle from 80.94.92.164 port 52620
Nov 22 08:27:32 compute-0 nova_compute[189268]: 2025-11-22 08:27:32.204 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:27:32 compute-0 nova_compute[189268]: 2025-11-22 08:27:32.205 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance a8349cde-3de3-4359-9fba-8d329cab9476 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:27:32 compute-0 nova_compute[189268]: 2025-11-22 08:27:32.206 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:27:32 compute-0 nova_compute[189268]: 2025-11-22 08:27:32.207 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:27:32 compute-0 nova_compute[189268]: 2025-11-22 08:27:32.272 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:27:32 compute-0 nova_compute[189268]: 2025-11-22 08:27:32.285 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:27:32 compute-0 nova_compute[189268]: 2025-11-22 08:27:32.290 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:27:32 compute-0 nova_compute[189268]: 2025-11-22 08:27:32.290 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:27:32 compute-0 sshd-session[240921]: Connection closed by invalid user oracle 80.94.92.164 port 52620 [preauth]
Nov 22 08:27:33 compute-0 nova_compute[189268]: 2025-11-22 08:27:33.902 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:36 compute-0 nova_compute[189268]: 2025-11-22 08:27:36.608 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:37 compute-0 podman[240948]: 2025-11-22 08:27:37.143771967 +0000 UTC m=+0.092549862 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 08:27:37 compute-0 podman[240947]: 2025-11-22 08:27:37.162163816 +0000 UTC m=+0.113188322 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm)
Nov 22 08:27:38 compute-0 nova_compute[189268]: 2025-11-22 08:27:38.906 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:41 compute-0 podman[240986]: 2025-11-22 08:27:41.113675093 +0000 UTC m=+0.074378299 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, release-0.7.12=, name=ubi9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, config_id=edpm, distribution-scope=public, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Nov 22 08:27:41 compute-0 podman[240987]: 2025-11-22 08:27:41.14010591 +0000 UTC m=+0.096684784 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:27:41 compute-0 nova_compute[189268]: 2025-11-22 08:27:41.611 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:43 compute-0 nova_compute[189268]: 2025-11-22 08:27:43.909 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:46 compute-0 podman[241028]: 2025-11-22 08:27:46.112164858 +0000 UTC m=+0.065429377 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=openstack_network_exporter, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 22 08:27:46 compute-0 nova_compute[189268]: 2025-11-22 08:27:46.612 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:48 compute-0 podman[241049]: 2025-11-22 08:27:48.129106285 +0000 UTC m=+0.086852798 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 22 08:27:48 compute-0 nova_compute[189268]: 2025-11-22 08:27:48.912 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:51 compute-0 nova_compute[189268]: 2025-11-22 08:27:51.614 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:53 compute-0 nova_compute[189268]: 2025-11-22 08:27:53.915 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:56 compute-0 nova_compute[189268]: 2025-11-22 08:27:56.617 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:58 compute-0 nova_compute[189268]: 2025-11-22 08:27:58.917 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:27:59 compute-0 podman[203476]: time="2025-11-22T08:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:27:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:27:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4794 "" "Go-http-client/1.1"
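The two access-log lines above are the podman service answering libpod REST calls on its unix socket (metrics collection by podman_exporter, whose CONTAINER_HOST in a later event is unix:///run/podman/podman.sock). A self-contained sketch of the same containers/json call; the socket path and the /v4.9.3 API prefix are taken from this log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client bound to a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    for ctr in json.loads(resp.read()):
        print(ctr["Names"], ctr.get("State"))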
Nov 22 08:28:01 compute-0 podman[241077]: 2025-11-22 08:28:01.12013112 +0000 UTC m=+0.073287610 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 08:28:01 compute-0 podman[241078]: 2025-11-22 08:28:01.125135286 +0000 UTC m=+0.072933370 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:28:01 compute-0 podman[241079]: 2025-11-22 08:28:01.15403389 +0000 UTC m=+0.098936196 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 08:28:01 compute-0 openstack_network_exporter[205661]: ERROR   08:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:28:01 compute-0 openstack_network_exporter[205661]: ERROR   08:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:28:01 compute-0 openstack_network_exporter[205661]: ERROR   08:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:28:01 compute-0 openstack_network_exporter[205661]: ERROR   08:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:28:01 compute-0 openstack_network_exporter[205661]: ERROR   08:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
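The ERROR lines above all reduce to one lookup failing: before calling appctl, the exporter searches the run directories for <daemon>.<pid>.ctl control sockets. A sketch of that search under the paths this host bind-mounts into the container; note that ovn-northd runs on the control plane rather than on a compute node, so an empty result for it here is expected:

    from pathlib import Path

    def find_ctl(rundir, daemon):
        # OVS/OVN daemons create <daemon>.<pid>.ctl in their run directory.
        return sorted(Path(rundir).glob(f"{daemon}.*.ctl"))

    for rundir, daemon in [
        ("/run/openvswitch", "ovsdb-server"),
        ("/run/openvswitch", "ovs-vswitchd"),
        ("/run/ovn", "ovn-northd"),
    ]:
        socks = find_ctl(rundir, daemon)
        if socks:
            print(daemon, "->", socks[0])
        else:
            print(f"no control socket files found for {daemon} in {rundir}")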
Nov 22 08:28:01 compute-0 nova_compute[189268]: 2025-11-22 08:28:01.619 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:02 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 22 08:28:03 compute-0 nova_compute[189268]: 2025-11-22 08:28:03.919 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:06 compute-0 nova_compute[189268]: 2025-11-22 08:28:06.622 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:08 compute-0 podman[241141]: 2025-11-22 08:28:08.118717598 +0000 UTC m=+0.072605721 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 08:28:08 compute-0 podman[241140]: 2025-11-22 08:28:08.146220274 +0000 UTC m=+0.103336425 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 22 08:28:08 compute-0 nova_compute[189268]: 2025-11-22 08:28:08.924 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:28:09.961 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:28:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:28:09.962 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:28:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:28:09.962 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
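The three lockutils lines above bracket one critical section: acquire requested, acquired after waiting 0.001s, then released after being held 0.001s. The same waited/held accounting in plain Python (illustrative only; oslo.concurrency layers lock names, decorators, and file-based inter-process locks over this):

    import threading
    import time

    lock = threading.Lock()

    def check_child_processes():
        t0 = time.monotonic()
        with lock:
            waited = time.monotonic() - t0   # time spent blocked on acquire
            t1 = time.monotonic()
            # ... inspect child processes here ...
            held = time.monotonic() - t1     # time spent inside the section
        print(f'Lock "_check_child_processes" waited {waited:.3f}s, held {held:.3f}s')

    check_child_processes()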
Nov 22 08:28:11 compute-0 nova_compute[189268]: 2025-11-22 08:28:11.625 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:12 compute-0 podman[241178]: 2025-11-22 08:28:12.105898844 +0000 UTC m=+0.062507777 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.expose-services=, vcs-type=git, release-0.7.12=, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.buildah.version=1.29.0, release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, config_id=edpm)
Nov 22 08:28:12 compute-0 podman[241179]: 2025-11-22 08:28:12.140648997 +0000 UTC m=+0.092569572 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 22 08:28:13 compute-0 nova_compute[189268]: 2025-11-22 08:28:13.928 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:16 compute-0 nova_compute[189268]: 2025-11-22 08:28:16.627 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:17 compute-0 podman[241221]: 2025-11-22 08:28:17.117665761 +0000 UTC m=+0.074836321 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, release=1755695350, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Nov 22 08:28:18 compute-0 nova_compute[189268]: 2025-11-22 08:28:18.932 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:19 compute-0 podman[241242]: 2025-11-22 08:28:19.118598304 +0000 UTC m=+0.069991570 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 22 08:28:21 compute-0 nova_compute[189268]: 2025-11-22 08:28:21.631 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.089 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.090 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
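The two manager.py messages above say the task list is longer than the worker pool, so pollsters queue and the polling cycle stretches. The effect in miniature, with illustrative numbers (the [1] thread matches the second line; nothing else is from this deployment's config):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)                    # stand-in for one pollster's work
        return name

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:
        list(pool.map(poll, [f"pollster-{i}" for i in range(4)]))
    # 4 queued tasks on 1 worker run serially: ~0.4 s, not ~0.1 s.
    print(f"elapsed: {time.monotonic() - start:.2f}s")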
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.090 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.098 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '78b5db02-f49a-4c0b-b4f6-8d3b3d689e66', 'name': 'test_0', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.102 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a8349cde-3de3-4359-9fba-8d329cab9476', 'name': 'vn-qv6tptr-whvy4btuikeu-vmbwmtq4hym4-vnf-rixlnkr2j72q', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {'metering.server_group': '209b9e59-811e-4c2b-a756-c29ba92c4b5c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
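The two discovery records above are plain dicts; the fields ceilometer keys its samples on are the instance id, the flavor, and any metering.* metadata (only the second instance carries a metering.server_group). A sketch of pulling those out, with the dicts trimmed from the records above:

    instances = [
        {"id": "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66", "name": "test_0",
         "flavor": {"name": "m1.small", "vcpus": 1}, "status": "active",
         "metadata": {}},
        {"id": "a8349cde-3de3-4359-9fba-8d329cab9476",
         "name": "vn-qv6tptr-whvy4btuikeu-vmbwmtq4hym4-vnf-rixlnkr2j72q",
         "flavor": {"name": "m1.small", "vcpus": 1}, "status": "active",
         "metadata": {"metering.server_group": "209b9e59-811e-4c2b-a756-c29ba92c4b5c"}},
    ]
    for inst in instances:
        group = inst["metadata"].get("metering.server_group")
        print(inst["id"], inst["flavor"]["name"], group or "-")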
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.102 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.103 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.103 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.103 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.104 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-22T08:28:22.103241) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.108 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.112 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.112 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
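network.incoming.bytes is a cumulative counter: the 1968 and 4849 byte volumes above are per-instance totals, not rates. Turning them into a rate needs two successive polls; the second sample below is hypothetical:

    def rate(prev, curr, interval_s):
        return (curr - prev) / interval_s

    prev = 4849            # from the 08:28:22 poll above
    curr = 6049            # hypothetical next poll
    print(f"{rate(prev, curr, 120):.1f} B/s over a 120 s polling interval")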
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.112 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.112 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.113 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.113 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.113 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.113 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.113 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.packets volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.113 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.114 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.114 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.114 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.114 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-22T08:28:22.113165) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.114 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.114 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-22T08:28:22.114613) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.115 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.115 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.115 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.115 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.115 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.115 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.115 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.115 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.116 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.116 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.116 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.116 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.116 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.116 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.116 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.117 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-22T08:28:22.115792) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.117 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-22T08:28:22.116909) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.138 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/cpu volume: 37590000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.162 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/cpu volume: 71930000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.163 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
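The cpu meter is likewise cumulative CPU time in nanoseconds: the 37590000000 ns volume above is about 37.6 s of CPU time for instance-00000001. Utilization between two polls, for the 1-vCPU m1.small flavor in the discovery records; the earlier sample and the 120 s interval are hypothetical:

    NS_PER_S = 1e9

    def cpu_util(prev_ns, curr_ns, interval_s, vcpus):
        return 100.0 * (curr_ns - prev_ns) / (interval_s * vcpus * NS_PER_S)

    # (1.2 s of CPU time over 120 s of wall time on 1 vCPU) -> 1.0%
    print(f"{cpu_util(36_390_000_000, 37_590_000_000, 120, vcpus=1):.1f}%")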
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.163 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.163 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.163 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.163 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.163 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.164 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.164 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-22T08:28:22.163759) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.164 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.164 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.164 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.164 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.165 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.165 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.165 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.165 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-22T08:28:22.165160) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.187 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.187 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.187 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.217 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.218 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.218 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.218 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
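disk.device.capacity emits one sample per block device, which is why each instance above produces three volume lines (two 1 GiB disks plus one much smaller device, likely a config drive). A hypothetical sketch of that per-device fan-out; the Sample shape is illustrative, and the "<instance-uuid>-<device>" resource id is the usual convention for per-device meters rather than something shown in these lines:

    from dataclasses import dataclass

    @dataclass
    class Sample:
        resource_id: str  # "<instance-uuid>-<device>" for per-device meters
        name: str
        volume: int
        unit: str

    def capacity_samples(instance_uuid, device_capacities):
        # One sample per attached block device, mirroring the repeated
        # _stats_to_sample lines above.
        for device, capacity in device_capacities.items():
            yield Sample(f"{instance_uuid}-{device}", "disk.device.capacity", capacity, "B")

    capacities = {"vda": 1073741824, "vdb": 1073741824, "vdc": 485376}  # values from the lines above
    for s in capacity_samples("78b5db02-f49a-4c0b-b4f6-8d3b3d689e66", capacities):
        print(s.resource_id, s.volume)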
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.218 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.218 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.219 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.219 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.219 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.220 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-22T08:28:22.219197) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.282 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.282 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.283 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 nova_compute[189268]: 2025-11-22 08:28:22.292 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:28:22 compute-0 nova_compute[189268]: 2025-11-22 08:28:22.292 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:28:22 compute-0 nova_compute[189268]: 2025-11-22 08:28:22.293 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
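The interleaved nova_compute lines come from oslo.service's periodic task machinery, which fires ComputeManager._heal_instance_info_cache on a timer inside the compute service process. A minimal sketch of how such a task is declared, assuming oslo.service and oslo.config are installed (the 60-second spacing is illustrative, not nova's configured value):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)  # run roughly once a minute
        def _heal_instance_info_cache(self, context):
            # Rebuild the list of instances whose network info cache needs healing.
            pass

    mgr = Manager()
    mgr.run_periodic_tasks(context=None)  # in a real service this is driven by a looping call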
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.388 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.388 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.389 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.390 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.391 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.394 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.394 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.395 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.395 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.395 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.395 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.396 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.396 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.397 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.397 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.398 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.398 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.398 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.398 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.398 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.398 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.399 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.399 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.bytes volume: 4822 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.399 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.399 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.399 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.400 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.400 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.400 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.401 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.401 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.401 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 1339396359 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.401 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 138141875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.401 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 117550863 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.402 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.latency volume: 875417919 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.402 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.latency volume: 107543456 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.402 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.latency volume: 90621118 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.403 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.403 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.403 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.404 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.404 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.404 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.404 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.404 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.bytes.delta volume: 4759 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.405 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.405 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.405 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.405 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.405 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.405 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.405 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.406 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.406 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-22T08:28:22.395259) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-22T08:28:22.398819) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.407 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-22T08:28:22.401098) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-22T08:28:22.404177) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.407 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-22T08:28:22.405693) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.407 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.408 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.409 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
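Note the interleaving in this stretch: worker 15 emits the "Pollster heartbeat update" lines as it polls, while worker 12 logs "Updated heartbeat for ... (<timestamp>)" some lines later, so the recorded timestamps trail the polls. A queue-based sketch of that producer/consumer pattern, assuming nothing about ceilometer's actual internals:

    import datetime
    import queue
    import threading

    heartbeats = queue.Queue()
    status = {}

    def poll_worker(meter):
        # "Pollster heartbeat update: <meter>"
        heartbeats.put(meter)

    def status_worker():
        while True:
            meter = heartbeats.get()
            if meter is None:
                break
            # "Updated heartbeat for <meter> (<timestamp>)"
            status[meter] = datetime.datetime.now(datetime.timezone.utc).isoformat()

    t = threading.Thread(target=status_worker)
    t.start()
    poll_worker("disk.device.write.requests")
    heartbeats.put(None)  # sentinel to stop the consumer
    t.join()
    print(status)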
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.409 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.409 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.409 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.409 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.409 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.410 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.410 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.410 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.411 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.411 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.412 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.413 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.413 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.414 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.414 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.414 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.414 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.415 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.415 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.416 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.417 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.417 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.418 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.418 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.419 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.419 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.419 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.420 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.420 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.420 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.421 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.421 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.421 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.422 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.422 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.423 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.423 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.424 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.424 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.424 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.424 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.424 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.425 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.425 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
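Both instances report power.state volume 1. That numbering matches libvirt's virDomainState enum (nova's power-state map uses the same values for the common states), where 1 means the domain is running; a small lookup table makes the raw volumes readable:

    LIBVIRT_POWER_STATES = {
        0: "no state",
        1: "running",
        2: "blocked",
        3: "paused",
        4: "shutting down",
        5: "shut off",
        6: "crashed",
        7: "suspended",
    }
    print(LIBVIRT_POWER_STATES[1])  # both instances above -> "running"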
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.426 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.426 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.426 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.427 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.427 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.427 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 18733649639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.427 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 19241219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.428 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.429 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.latency volume: 3212925156 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.430 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.latency volume: 13984579 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.430 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.431 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.431 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.432 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.432 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-22T08:28:22.409930) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.432 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.432 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.432 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-22T08:28:22.414918) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.432 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-22T08:28:22.420096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.432 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.433 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-22T08:28:22.424621) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.433 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-22T08:28:22.427288) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.433 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.433 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-22T08:28:22.432897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.433 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.433 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.433 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.434 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.434 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.434 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.434 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.434 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.435 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.435 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.435 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.435 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-22T08:28:22.434119) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.435 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.436 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.436 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.436 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.436 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.436 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-22T08:28:22.435776) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.437 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.437 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.437 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.437 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.437 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-22T08:28:22.437078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.438 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.438 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.438 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.438 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.438 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.438 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.438 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.439 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.bytes.delta volume: 4822 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.439 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.439 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.439 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.439 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.439 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.440 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.440 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.440 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.440 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/memory.usage volume: 48.9375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.440 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/memory.usage volume: 49.16015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.441 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.441 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-22T08:28:22.438608) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.441 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.441 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.442 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.442 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.442 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.442 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.442 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-22T08:28:22.440269) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.442 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.443 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.443 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.443 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.443 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.443 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.443 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.443 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.443 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.444 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.444 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.444 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.444 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.444 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.444 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.444 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.444 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.444 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.444 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:28:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:28:22.445 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
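The ceilometer entries above trace one full polling cycle per meter: discovery, a coordination check, a heartbeat update, one sample per instance, then a "Finished processing" line per pollster. A minimal Python sketch of that loop shape; the Pollster and AgentManager names mirror the log, but this is an illustration of the pattern, not ceilometer's real API.

    import datetime

    class Pollster:
        """Stand-in for one meter, e.g. memory.usage."""
        def __init__(self, name):
            self.name = name
            self.last_heartbeat = None

        def get_samples(self, resources):
            for res in resources:
                # ceilometer logs "<instance uuid>/<meter> volume: <value>" here
                yield (res, self.name, 0.0)

    class AgentManager:
        def __init__(self, pollsters):
            self.pollsters = pollsters

        def poll(self, discover):
            for pollster in self.pollsters:
                resources = discover()        # "Executing discovery process ..."
                if not resources:
                    continue                  # "Skip pollster ..., no new resources"
                pollster.last_heartbeat = datetime.datetime.utcnow()
                for sample in pollster.get_samples(resources):
                    print(sample)             # the _stats_to_sample debug lines
                print(f"Finished polling pollster {pollster.name}")

    AgentManager([Pollster("memory.usage")]).poll(
        lambda: ["78b5db02-f49a-4c0b-b4f6-8d3b3d689e66"])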
Nov 22 08:28:22 compute-0 nova_compute[189268]: 2025-11-22 08:28:22.675 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:28:22 compute-0 nova_compute[189268]: 2025-11-22 08:28:22.676 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:28:22 compute-0 nova_compute[189268]: 2025-11-22 08:28:22.677 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:28:22 compute-0 nova_compute[189268]: 2025-11-22 08:28:22.677 189273 DEBUG nova.objects.instance [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:28:23 compute-0 nova_compute[189268]: 2025-11-22 08:28:23.694 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updating instance_info_cache with network_info: [{"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "address": "fa:16:3e:4f:4a:5d", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.53", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4645bc8c-a8", "ovs_interfaceid": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:28:23 compute-0 nova_compute[189268]: 2025-11-22 08:28:23.713 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:28:23 compute-0 nova_compute[189268]: 2025-11-22 08:28:23.714 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
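The Acquiring/Acquired/Releasing triple around "refresh_cache-<uuid>" is oslo.concurrency's lock logging. A minimal sketch using the real lockutils context manager, assuming oslo.concurrency is installed; the lock name is copied from the log and the body is a placeholder.

    from oslo_concurrency import lockutils

    uuid = "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66"
    with lockutils.lock(f"refresh_cache-{uuid}"):
        # nova forcefully refreshes the instance's network info cache while
        # holding the lock, then logs "Releasing lock" on exit
        pass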
Nov 22 08:28:23 compute-0 nova_compute[189268]: 2025-11-22 08:28:23.715 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:28:23 compute-0 nova_compute[189268]: 2025-11-22 08:28:23.936 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:24 compute-0 nova_compute[189268]: 2025-11-22 08:28:24.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:28:25 compute-0 nova_compute[189268]: 2025-11-22 08:28:25.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:28:26 compute-0 nova_compute[189268]: 2025-11-22 08:28:26.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:28:26 compute-0 nova_compute[189268]: 2025-11-22 08:28:26.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:28:26 compute-0 nova_compute[189268]: 2025-11-22 08:28:26.633 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:28 compute-0 nova_compute[189268]: 2025-11-22 08:28:28.095 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:28:28 compute-0 nova_compute[189268]: 2025-11-22 08:28:28.097 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:28:28 compute-0 nova_compute[189268]: 2025-11-22 08:28:28.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
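The run of "Running periodic task ComputeManager._*" lines is oslo.service's periodic task runner walking the compute manager's decorated methods; _reclaim_queued_deletes above short-circuits because CONF.reclaim_instance_interval is not positive. A minimal sketch with the real periodic_task decorator; the Manager class and the 60-second spacing are illustrative.

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _poll_rebooting_instances(self, context):
            pass  # nova checks for instances stuck in reboot here

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)  # emits "Running periodic task ..."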
Nov 22 08:28:28 compute-0 nova_compute[189268]: 2025-11-22 08:28:28.940 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:29 compute-0 podman[203476]: time="2025-11-22T08:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:28:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:28:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4793 "" "Go-http-client/1.1"
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.128 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.129 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.130 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.130 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.202 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.265 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.266 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.324 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.326 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.388 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.390 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:28:31 compute-0 openstack_network_exporter[205661]: ERROR   08:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:28:31 compute-0 openstack_network_exporter[205661]: ERROR   08:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:28:31 compute-0 openstack_network_exporter[205661]: ERROR   08:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:28:31 compute-0 openstack_network_exporter[205661]: ERROR   08:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:28:31 compute-0 openstack_network_exporter[205661]: ERROR   08:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
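These exporter errors repeat every scrape because ovs-appctl-style calls locate a daemon through its control socket, and neither an ovn-northd nor a local ovsdb-server socket exists on a compute node (northd runs on the controllers). A short sketch of the lookup that is failing; the glob patterns are the usual default socket locations, not paths read from this host.

    import glob

    for daemon, pattern in [
            ("ovn-northd", "/var/run/ovn/ovn-northd.*.ctl"),
            ("ovsdb-server", "/var/run/openvswitch/ovsdb-server.*.ctl")]:
        found = glob.glob(pattern)
        print(daemon, "->", found or "no control socket files found")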
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.467 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.475 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.549 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.550 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.608 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.610 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.635 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.675 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.676 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:28:31 compute-0 nova_compute[189268]: 2025-11-22 08:28:31.737 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
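Each qemu-img probe above is wrapped in oslo.concurrency's prlimit helper, capping the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30), exactly as the logged command lines show. A sketch using the real processutils API; the disk path is copied from the log.

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk',
        '--force-share', '--output=json',
        prlimit=limits,
        env_variables={'LC_ALL': 'C', 'LANG': 'C'})
    print(out)  # JSON image description, consumed by nova's resource audit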
Nov 22 08:28:32 compute-0 nova_compute[189268]: 2025-11-22 08:28:32.105 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:28:32 compute-0 nova_compute[189268]: 2025-11-22 08:28:32.106 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5069MB free_disk=72.48494720458984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:28:32 compute-0 nova_compute[189268]: 2025-11-22 08:28:32.107 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:28:32 compute-0 nova_compute[189268]: 2025-11-22 08:28:32.107 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:28:32 compute-0 podman[241297]: 2025-11-22 08:28:32.113616588 +0000 UTC m=+0.070759691 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:28:32 compute-0 podman[241296]: 2025-11-22 08:28:32.130777944 +0000 UTC m=+0.084133914 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 08:28:32 compute-0 podman[241298]: 2025-11-22 08:28:32.131628776 +0000 UTC m=+0.079139208 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
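The three container health_status events above are podman's scheduled health checks reporting healthy, with the check command and mount taken from each container's healthcheck config embedded in the event. The same check can be triggered by hand; a sketch shelling out to the real podman healthcheck subcommand, using a container name from the log.

    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True)
    # exit status 0 means the healthcheck command inside the container passed
    print("healthy" if result.returncode == 0 else "unhealthy")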
Nov 22 08:28:32 compute-0 nova_compute[189268]: 2025-11-22 08:28:32.174 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:28:32 compute-0 nova_compute[189268]: 2025-11-22 08:28:32.174 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance a8349cde-3de3-4359-9fba-8d329cab9476 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:28:32 compute-0 nova_compute[189268]: 2025-11-22 08:28:32.174 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:28:32 compute-0 nova_compute[189268]: 2025-11-22 08:28:32.175 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:28:32 compute-0 nova_compute[189268]: 2025-11-22 08:28:32.219 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:28:32 compute-0 nova_compute[189268]: 2025-11-22 08:28:32.230 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
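The placement inventory above implies the schedulable capacity directly: each resource class exposes (total - reserved) * allocation_ratio units. A few lines of arithmetic over the logged values; the results are consistent with the final resource view (8 vCPUs oversubscribed 4x, 79 GB disk at a 0.9 ratio).

    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)   # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 70.2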
Nov 22 08:28:32 compute-0 nova_compute[189268]: 2025-11-22 08:28:32.231 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:28:32 compute-0 nova_compute[189268]: 2025-11-22 08:28:32.232 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:28:33 compute-0 nova_compute[189268]: 2025-11-22 08:28:33.943 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:36 compute-0 nova_compute[189268]: 2025-11-22 08:28:36.640 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:38 compute-0 nova_compute[189268]: 2025-11-22 08:28:38.947 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:39 compute-0 podman[241353]: 2025-11-22 08:28:39.124140875 +0000 UTC m=+0.081949121 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251118, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true)
Nov 22 08:28:39 compute-0 podman[241354]: 2025-11-22 08:28:39.131228136 +0000 UTC m=+0.083336478 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:28:41 compute-0 nova_compute[189268]: 2025-11-22 08:28:41.641 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:43 compute-0 podman[241391]: 2025-11-22 08:28:43.153698524 +0000 UTC m=+0.105094786 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, name=ubi9, release=1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, container_name=kepler, managed_by=edpm_ansible, version=9.4, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Nov 22 08:28:43 compute-0 podman[241392]: 2025-11-22 08:28:43.191917495 +0000 UTC m=+0.139571525 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:28:43 compute-0 nova_compute[189268]: 2025-11-22 08:28:43.950 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:46 compute-0 nova_compute[189268]: 2025-11-22 08:28:46.643 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:48 compute-0 podman[241431]: 2025-11-22 08:28:48.139958912 +0000 UTC m=+0.090074001 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, architecture=x86_64, config_id=edpm, name=ubi9-minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 22 08:28:48 compute-0 nova_compute[189268]: 2025-11-22 08:28:48.955 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:50 compute-0 podman[241453]: 2025-11-22 08:28:50.120402146 +0000 UTC m=+0.073376069 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:28:51 compute-0 nova_compute[189268]: 2025-11-22 08:28:51.645 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:53 compute-0 nova_compute[189268]: 2025-11-22 08:28:53.958 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:56 compute-0 nova_compute[189268]: 2025-11-22 08:28:56.647 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:58 compute-0 nova_compute[189268]: 2025-11-22 08:28:58.962 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:28:59 compute-0 podman[203476]: time="2025-11-22T08:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:28:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:28:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4793 "" "Go-http-client/1.1"
Nov 22 08:29:01 compute-0 openstack_network_exporter[205661]: ERROR   08:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:29:01 compute-0 openstack_network_exporter[205661]: ERROR   08:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:29:01 compute-0 openstack_network_exporter[205661]: ERROR   08:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:29:01 compute-0 openstack_network_exporter[205661]: ERROR   08:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:29:01 compute-0 openstack_network_exporter[205661]: ERROR   08:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:29:01 compute-0 nova_compute[189268]: 2025-11-22 08:29:01.649 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:03 compute-0 podman[241483]: 2025-11-22 08:29:03.123367268 +0000 UTC m=+0.062687572 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 22 08:29:03 compute-0 podman[241477]: 2025-11-22 08:29:03.143427458 +0000 UTC m=+0.084212011 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 08:29:03 compute-0 podman[241476]: 2025-11-22 08:29:03.149296158 +0000 UTC m=+0.100911593 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 22 08:29:03 compute-0 nova_compute[189268]: 2025-11-22 08:29:03.965 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:06 compute-0 nova_compute[189268]: 2025-11-22 08:29:06.652 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:08 compute-0 nova_compute[189268]: 2025-11-22 08:29:08.969 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:29:09.962 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:29:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:29:09.963 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:29:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:29:09.964 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:29:10 compute-0 podman[241533]: 2025-11-22 08:29:10.135986907 +0000 UTC m=+0.091639392 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 08:29:10 compute-0 podman[241534]: 2025-11-22 08:29:10.14838111 +0000 UTC m=+0.099688568 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:29:11 compute-0 nova_compute[189268]: 2025-11-22 08:29:11.655 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:13 compute-0 nova_compute[189268]: 2025-11-22 08:29:13.972 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:14 compute-0 podman[241572]: 2025-11-22 08:29:14.11115798 +0000 UTC m=+0.068789026 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, config_id=edpm, com.redhat.component=ubi9-container, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=base rhel9, name=ubi9, architecture=x86_64, version=9.4)
Nov 22 08:29:14 compute-0 podman[241573]: 2025-11-22 08:29:14.141834197 +0000 UTC m=+0.094159201 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 08:29:16 compute-0 nova_compute[189268]: 2025-11-22 08:29:16.659 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:18 compute-0 nova_compute[189268]: 2025-11-22 08:29:18.975 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:19 compute-0 podman[241616]: 2025-11-22 08:29:19.14018712 +0000 UTC m=+0.095800825 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 22 08:29:21 compute-0 podman[241637]: 2025-11-22 08:29:21.11901247 +0000 UTC m=+0.072351793 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:29:21 compute-0 nova_compute[189268]: 2025-11-22 08:29:21.661 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:22 compute-0 nova_compute[189268]: 2025-11-22 08:29:22.231 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:29:22 compute-0 nova_compute[189268]: 2025-11-22 08:29:22.232 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:29:22 compute-0 nova_compute[189268]: 2025-11-22 08:29:22.683 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:29:22 compute-0 nova_compute[189268]: 2025-11-22 08:29:22.684 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:29:22 compute-0 nova_compute[189268]: 2025-11-22 08:29:22.684 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:29:23 compute-0 nova_compute[189268]: 2025-11-22 08:29:23.700 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Updating instance_info_cache with network_info: [{"id": "c99bd243-1114-4104-8d75-dd481789f958", "address": "fa:16:3e:2a:fd:a4", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc99bd243-11", "ovs_interfaceid": "c99bd243-1114-4104-8d75-dd481789f958", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:29:23 compute-0 nova_compute[189268]: 2025-11-22 08:29:23.713 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:29:23 compute-0 nova_compute[189268]: 2025-11-22 08:29:23.714 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 08:29:23 compute-0 nova_compute[189268]: 2025-11-22 08:29:23.977 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:24 compute-0 nova_compute[189268]: 2025-11-22 08:29:24.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:29:26 compute-0 nova_compute[189268]: 2025-11-22 08:29:26.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:29:26 compute-0 nova_compute[189268]: 2025-11-22 08:29:26.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:29:26 compute-0 nova_compute[189268]: 2025-11-22 08:29:26.664 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:27 compute-0 nova_compute[189268]: 2025-11-22 08:29:27.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:29:27 compute-0 nova_compute[189268]: 2025-11-22 08:29:27.112 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:29:27 compute-0 nova_compute[189268]: 2025-11-22 08:29:27.113 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:29:28 compute-0 nova_compute[189268]: 2025-11-22 08:29:28.112 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:29:28 compute-0 nova_compute[189268]: 2025-11-22 08:29:28.979 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:29 compute-0 nova_compute[189268]: 2025-11-22 08:29:29.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:29:29 compute-0 podman[203476]: time="2025-11-22T08:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:29:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:29:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4793 "" "Go-http-client/1.1"
Nov 22 08:29:30 compute-0 nova_compute[189268]: 2025-11-22 08:29:30.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:29:31 compute-0 openstack_network_exporter[205661]: ERROR   08:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:29:31 compute-0 openstack_network_exporter[205661]: ERROR   08:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:29:31 compute-0 openstack_network_exporter[205661]: ERROR   08:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:29:31 compute-0 openstack_network_exporter[205661]: ERROR   08:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:29:31 compute-0 openstack_network_exporter[205661]: ERROR   08:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:29:31 compute-0 nova_compute[189268]: 2025-11-22 08:29:31.666 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.123 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.123 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.124 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.124 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.214 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.278 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.280 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.337 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.339 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.405 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.410 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.469 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.477 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.543 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.544 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.630 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.631 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.702 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.707 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:29:32 compute-0 nova_compute[189268]: 2025-11-22 08:29:32.776 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:29:33 compute-0 nova_compute[189268]: 2025-11-22 08:29:33.130 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:29:33 compute-0 nova_compute[189268]: 2025-11-22 08:29:33.132 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5076MB free_disk=72.48494720458984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:29:33 compute-0 nova_compute[189268]: 2025-11-22 08:29:33.132 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:29:33 compute-0 nova_compute[189268]: 2025-11-22 08:29:33.132 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:29:33 compute-0 nova_compute[189268]: 2025-11-22 08:29:33.221 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:29:33 compute-0 nova_compute[189268]: 2025-11-22 08:29:33.221 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance a8349cde-3de3-4359-9fba-8d329cab9476 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:29:33 compute-0 nova_compute[189268]: 2025-11-22 08:29:33.221 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:29:33 compute-0 nova_compute[189268]: 2025-11-22 08:29:33.222 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:29:33 compute-0 nova_compute[189268]: 2025-11-22 08:29:33.290 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:29:33 compute-0 nova_compute[189268]: 2025-11-22 08:29:33.301 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:29:33 compute-0 nova_compute[189268]: 2025-11-22 08:29:33.302 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:29:33 compute-0 nova_compute[189268]: 2025-11-22 08:29:33.303 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:29:33 compute-0 nova_compute[189268]: 2025-11-22 08:29:33.984 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:34 compute-0 podman[241686]: 2025-11-22 08:29:34.135635039 +0000 UTC m=+0.080485191 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:29:34 compute-0 podman[241685]: 2025-11-22 08:29:34.14085763 +0000 UTC m=+0.093805430 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 08:29:34 compute-0 podman[241687]: 2025-11-22 08:29:34.152709591 +0000 UTC m=+0.096300378 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 22 08:29:36 compute-0 nova_compute[189268]: 2025-11-22 08:29:36.669 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:38 compute-0 nova_compute[189268]: 2025-11-22 08:29:38.986 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:41 compute-0 podman[241743]: 2025-11-22 08:29:41.11826303 +0000 UTC m=+0.070681627 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 08:29:41 compute-0 podman[241744]: 2025-11-22 08:29:41.156771748 +0000 UTC m=+0.101698603 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3)
Nov 22 08:29:41 compute-0 nova_compute[189268]: 2025-11-22 08:29:41.670 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:43 compute-0 nova_compute[189268]: 2025-11-22 08:29:43.989 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:44 compute-0 podman[241779]: 2025-11-22 08:29:44.795846868 +0000 UTC m=+0.106573585 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.buildah.version=1.29.0, release=1214.1726694543, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 22 08:29:44 compute-0 podman[241780]: 2025-11-22 08:29:44.821355555 +0000 UTC m=+0.122119203 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 22 08:29:46 compute-0 nova_compute[189268]: 2025-11-22 08:29:46.673 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:48 compute-0 nova_compute[189268]: 2025-11-22 08:29:48.992 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:50 compute-0 podman[241821]: 2025-11-22 08:29:50.107935951 +0000 UTC m=+0.068270742 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, vcs-type=git, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter)
Nov 22 08:29:51 compute-0 nova_compute[189268]: 2025-11-22 08:29:51.678 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:52 compute-0 podman[241842]: 2025-11-22 08:29:52.139414831 +0000 UTC m=+0.092070024 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
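The node_exporter record above restricts systemd collection with --collector.systemd.unit-include. A quick check of which units that regex admits; node_exporter anchors its include/exclude patterns, which the sketch reproduces, and the unit names tried here are illustrative:

    import re

    # Pattern copied from the node_exporter command line above, wrapped
    # the way node_exporter anchors it.
    UNIT_INCLUDE = re.compile(
        r"^(?:(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service)$")

    for unit in ("edpm_nova_compute.service", "ovsdb-server.service",
                 "virtqemud.service", "rsyslog.service", "sshd.service"):
        verdict = "collected" if UNIT_INCLUDE.match(unit) else "skipped"
        print(f"{unit}: {verdict}")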
Nov 22 08:29:53 compute-0 nova_compute[189268]: 2025-11-22 08:29:53.997 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:56 compute-0 nova_compute[189268]: 2025-11-22 08:29:56.679 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:59 compute-0 nova_compute[189268]: 2025-11-22 08:29:59.002 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:29:59 compute-0 podman[203476]: time="2025-11-22T08:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:29:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:29:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4795 "" "Go-http-client/1.1"
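The two GET lines above are the libpod REST API being scraped over podman's unix socket (the podman_exporter record later in this log points at unix:///run/podman/podman.sock). A self-contained sketch of the same containers/json query using only the standard library; the socket path is taken from that configuration and usually requires root:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket, enough for the libpod API."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")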
Nov 22 08:30:01 compute-0 openstack_network_exporter[205661]: ERROR   08:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:30:01 compute-0 openstack_network_exporter[205661]: ERROR   08:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:30:01 compute-0 openstack_network_exporter[205661]: ERROR   08:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:30:01 compute-0 openstack_network_exporter[205661]: ERROR   08:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
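The appctl errors above come from the exporter resolving daemons the ovs-appctl way: a daemon <name> writes <rundir>/<name>.pid and listens on a control socket <rundir>/<name>.<pid>.ctl. A sketch of that lookup; the run directories below are the conventional defaults and an assumption here:

    import glob
    import os

    def find_ctl(name, rundirs=("/var/run/openvswitch", "/var/run/ovn")):
        # Return the first <rundir>/<name>.<pid>.ctl control socket found,
        # or None if the daemon left no socket behind.
        for rundir in rundirs:
            hits = glob.glob(os.path.join(rundir, name + ".*.ctl"))
            if hits:
                return hits[0]
        return None

    print(find_ctl("ovn-northd"))    # None on this host
    print(find_ctl("ovs-vswitchd"))  # the vswitchd socket, if OVS is up

Finding no ovn-northd socket is expected on a compute node, since ovn-northd runs on the control plane; the dpif-netdev errors are likewise expected when OVS has no userspace datapath configured.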
Nov 22 08:30:01 compute-0 nova_compute[189268]: 2025-11-22 08:30:01.683 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:04 compute-0 nova_compute[189268]: 2025-11-22 08:30:04.006 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:04 compute-0 podman[241866]: 2025-11-22 08:30:04.30564359 +0000 UTC m=+0.064322796 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 08:30:04 compute-0 podman[241868]: 2025-11-22 08:30:04.325271469 +0000 UTC m=+0.076477393 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 08:30:04 compute-0 podman[241867]: 2025-11-22 08:30:04.341013243 +0000 UTC m=+0.096295697 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
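The ovn_metadata_agent definition above pins an EDPM_CONFIG_HASH in its environment; edpm_ansible uses such a digest so that a changed configuration forces the container to be recreated. One way such a digest could be computed, as a sketch only (sha256 and this walk order are illustrative assumptions, not the edpm_ansible implementation):

    import hashlib
    import os

    def config_hash(root):
        # Hash file paths and contents under a config dir in a stable
        # order, so any change yields a new digest.
        digest = hashlib.sha256()
        for dirpath, dirnames, filenames in sorted(os.walk(root)):
            for fname in sorted(filenames):
                path = os.path.join(dirpath, fname)
                digest.update(path.encode())
                with open(path, "rb") as f:
                    digest.update(f.read())
        return digest.hexdigest()

    print(config_hash(
        "/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent"))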
Nov 22 08:30:06 compute-0 nova_compute[189268]: 2025-11-22 08:30:06.687 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:09 compute-0 nova_compute[189268]: 2025-11-22 08:30:09.008 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:30:09.964 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:30:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:30:09.965 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:30:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:30:09.965 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
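The three lockutils lines above are the standard oslo pattern: acquire, report the wait, run the critical section, report the hold time. A stdlib-only stand-in that reproduces the same accounting (a sketch, not the oslo_concurrency implementation):

    import threading
    import time
    from contextlib import contextmanager

    _locks = {}

    @contextmanager
    def timed_lock(name):
        # Report how long the caller waited for the lock and how long it
        # held it, matching the 'waited 0.001s' / 'held 0.000s' lines.
        lock = _locks.setdefault(name, threading.Lock())
        t0 = time.monotonic()
        lock.acquire()
        print(f'Lock "{name}" acquired :: waited {time.monotonic() - t0:.3f}s')
        t1 = time.monotonic()
        try:
            yield
        finally:
            lock.release()
            print(f'Lock "{name}" released :: held {time.monotonic() - t1:.3f}s')

    with timed_lock("_check_child_processes"):
        pass  # the monitored section, e.g. checking child processes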
Nov 22 08:30:11 compute-0 nova_compute[189268]: 2025-11-22 08:30:11.692 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:12 compute-0 podman[241925]: 2025-11-22 08:30:12.136932625 +0000 UTC m=+0.082528346 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 08:30:12 compute-0 podman[241926]: 2025-11-22 08:30:12.148885688 +0000 UTC m=+0.086760391 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:30:14 compute-0 nova_compute[189268]: 2025-11-22 08:30:14.011 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:15 compute-0 podman[241961]: 2025-11-22 08:30:15.162733428 +0000 UTC m=+0.107127110 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, version=9.4, distribution-scope=public, release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=kepler, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Nov 22 08:30:15 compute-0 podman[241962]: 2025-11-22 08:30:15.182810949 +0000 UTC m=+0.116385379 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 22 08:30:16 compute-0 nova_compute[189268]: 2025-11-22 08:30:16.691 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:19 compute-0 nova_compute[189268]: 2025-11-22 08:30:19.015 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:21 compute-0 podman[242004]: 2025-11-22 08:30:21.133983795 +0000 UTC m=+0.083218924 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 22 08:30:21 compute-0 nova_compute[189268]: 2025-11-22 08:30:21.692 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.090 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; the polling process may therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.090 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
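The two manager lines above describe the execution model: every pollster from the [pollsters] source is submitted to one shared executor, and with a single worker thread they run strictly in sequence, hence the warning that a cycle may overrun. A minimal sketch of that model (the pollster names are taken from the polls below):

    from concurrent.futures import ThreadPoolExecutor

    pollsters = ["network.incoming.bytes", "network.outgoing.packets", "cpu"]

    def run_pollster(name):
        # Stand-in for one pollster's discovery + sampling work.
        return f"polled {name}"

    # max_workers=1 matches "[1] threads" above: tasks queue and run
    # one after another on the single worker.
    with ThreadPoolExecutor(max_workers=1) as executor:
        for future in [executor.submit(run_pollster, p) for p in pollsters]:
            print(future.result())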
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.090 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.091 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e6720>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.096 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '78b5db02-f49a-4c0b-b4f6-8d3b3d689e66', 'name': 'test_0', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.099 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a8349cde-3de3-4359-9fba-8d329cab9476', 'name': 'vn-qv6tptr-whvy4btuikeu-vmbwmtq4hym4-vnf-rixlnkr2j72q', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {'metering.server_group': '209b9e59-811e-4c2b-a756-c29ba92c4b5c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
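The discovery payloads above are plain dicts; pollsters lift fields from them into sample metadata, with user metadata carried under the metering. prefix (as in the second instance's metering.server_group). A sketch of that extraction, abridged to the fields shown above:

    instances = [
        {"id": "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66", "name": "test_0",
         "flavor": {"name": "m1.small", "vcpus": 1}, "metadata": {}},
        {"id": "a8349cde-3de3-4359-9fba-8d329cab9476",
         "name": "vn-qv6tptr-whvy4btuikeu-vmbwmtq4hym4-vnf-rixlnkr2j72q",
         "flavor": {"name": "m1.small", "vcpus": 1},
         "metadata": {"metering.server_group":
                      "209b9e59-811e-4c2b-a756-c29ba92c4b5c"}},
    ]

    for inst in instances:
        # Only metering.* keys are treated as user metadata.
        user_meta = {k[len("metering."):]: v
                     for k, v in inst["metadata"].items()
                     if k.startswith("metering.")}
        print(inst["name"], inst["flavor"]["name"],
              user_meta or "(no user metadata)")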
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.099 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.099 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.099 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.099 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.101 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-22T08:30:22.099895) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.103 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.107 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.108 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.108 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.108 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.108 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.108 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.109 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.109 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.109 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.packets volume: 43 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.109 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.109 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.109 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.110 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.110 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.110 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.110 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.110 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.110 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.110 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.111 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.111 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.111 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.111 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.111 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.111 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.111 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.112 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.112 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.112 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.112 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.112 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-22T08:30:22.108965) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-22T08:30:22.110211) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.115 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-22T08:30:22.111339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.115 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-22T08:30:22.112527) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
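Two worker IDs interleave here: worker 15 does the polling, while worker 12 persists the heartbeat timestamps a few milliseconds later via _update_status; the timestamps it records (e.g. 2025-11-22T08:30:22.108965) match the instants the corresponding heartbeat lines were emitted. A hypothetical producer/consumer sketch of that hand-off, assuming a simple queue between the two workers (ceilometer's real mechanism may differ):

    # Hypothetical sketch of the heartbeat hand-off seen above.
    import datetime
    import queue

    heartbeats = queue.Queue()

    def record_heartbeat(meter):             # polling worker (15 in the log)
        heartbeats.put((meter, datetime.datetime.utcnow()))

    def update_status(store):                # status worker (12 in the log)
        meter, ts = heartbeats.get()
        store[meter] = ts                    # "Updated heartbeat for <meter> (<ts>)"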
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.135 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/cpu volume: 38980000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.160 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/cpu volume: 191480000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.161 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
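The cpu samples are cumulative guest CPU time in nanoseconds: 38980000000 ns is roughly 39.0 s for instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66, and 191480000000 ns roughly 191.5 s for a8349cde-3de3-4359-9fba-8d329cab9476. Utilization over an interval falls out of differencing two successive polls, as in this sketch (the 300 s interval and the follow-up reading are invented for illustration):

    # Sketch: derive CPU utilization from two cumulative cpu samples (ns).
    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus=1):
        used_s = (curr_ns - prev_ns) / 1e9
        return 100.0 * used_s / (interval_s * vcpus)

    # Hypothetical next poll 300 s later reading 39_280_000_000 ns:
    print(cpu_util_percent(38_980_000_000, 39_280_000_000, 300))  # 0.1 (%)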
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.161 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.161 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.161 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.162 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.162 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.162 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.162 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.162 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.163 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.163 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.163 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.163 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.163 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.164 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-22T08:30:22.162147) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.164 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-22T08:30:22.163435) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.189 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.189 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.189 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.213 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.214 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.214 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.214 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
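disk.device.capacity emits one sample per block device, which is why each instance contributes three values: two devices of exactly 1073741824 bytes (1 GiB) and one small device of 485376 or 583680 bytes (474 and 570 KiB; plausibly a config-drive-sized device, though the log lines do not name the devices). A quick conversion sketch:

    # Sketch: render the per-device capacity samples above in binary units.
    def human(nbytes):
        for unit in ("B", "KiB", "MiB", "GiB"):
            if nbytes < 1024 or unit == "GiB":
                return f"{nbytes} {unit}" if unit == "B" else f"{nbytes:.1f} {unit}"
            nbytes /= 1024

    for vol in (1073741824, 1073741824, 485376):
        print(human(vol))  # 1.0 GiB, 1.0 GiB, 474.0 KiB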
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.215 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.215 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.215 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.215 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.215 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.216 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-22T08:30:22.215502) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.269 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.269 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.269 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 nova_compute[189268]: 2025-11-22 08:30:22.302 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:30:22 compute-0 nova_compute[189268]: 2025-11-22 08:30:22.303 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:30:22 compute-0 nova_compute[189268]: 2025-11-22 08:30:22.303 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
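Interleaved with the ceilometer cycle, nova-compute's own periodic machinery fires ComputeManager._heal_instance_info_cache, which rebuilds its list of instances and refreshes their network info caches. The scheduling comes from oslo.service's periodic_task module; a minimal sketch of that pattern follows (illustrative only, not nova's actual manager; the 60 s spacing is an assumption):

    # Minimal oslo.service periodic-task sketch behind the
    # "Running periodic task ..." line above.
    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)  # spacing chosen for illustration
        def _heal_instance_info_cache(self, context):
            pass  # rebuild the instance list, refresh one cache entry per run

    # the service loop periodically invokes:
    #     manager.run_periodic_tasks(context)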
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.336 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.337 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.337 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.338 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.339 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.339 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.339 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.339 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.339 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.339 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.340 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.340 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.340 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.340 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.340 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.341 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.341 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.341 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.341 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.341 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.341 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.341 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.341 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.bytes volume: 4892 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.342 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.342 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.342 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
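Unlike its cumulative siblings, network.incoming.bytes.rate is skipped: the discovery run for this pollster produced no resources this cycle, so there is nothing to sample. A sketch of that skip branch, reusing the hypothetical manager/pollster names from the lifecycle sketch earlier:

    # Sketch of the skip branch logged above (names are illustrative).
    import logging

    LOG = logging.getLogger(__name__)

    def maybe_poll(manager, pollster, meter):
        resources = manager.discover(["local_instances"])
        if not resources:
            LOG.debug("Skip pollster %s, no new resources found this cycle", meter)
            return
        run_pollster(manager, pollster, meter)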
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.342 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.342 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.342 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.342 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.342 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.342 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 1339396359 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.342 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 138141875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.343 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 117550863 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.343 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.latency volume: 875417919 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.343 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.latency volume: 107543456 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.343 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.latency volume: 90621118 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.343 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.343 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.344 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.344 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.344 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.344 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.344 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.344 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.344 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.344 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.344 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.344 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.345 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.345 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.345 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.345 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.345 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.345 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.345 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.346 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.346 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.346 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.346 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.346 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.346 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.346 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.346 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.346 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.347 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.347 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.347 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.347 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.347 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
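Pairing disk.device.read.latency (earlier in this cycle) with disk.device.read.requests just above gives a rough average time per read, since the latency meter is cumulative nanoseconds spent on reads per device. Assuming the first sample of each meter refers to the same first device of instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 (an assumption; the log does not name devices):

    # Sketch: average read latency per request from the two meters above.
    read_time_ns = 1_339_396_359   # disk.device.read.latency, first device
    read_reqs = 840                # disk.device.read.requests, same device (assumed pairing)
    print(read_time_ns / read_reqs / 1e6)  # ~1.59 ms per read on average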
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.348 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.348 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.348 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.348 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.348 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.348 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.348 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.348 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.348 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.349 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.349 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.349 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.349 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.349 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.349 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.350 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.350 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.350 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.350 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.350 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.350 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.350 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.351 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.351 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.351 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.351 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.351 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.351 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.351 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.351 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.352 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.352 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
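power.state reports the hypervisor's domain state; a volume of 1 presumably corresponds to libvirt's VIR_DOMAIN_RUNNING (virDomainState value 1), i.e. both instances are up. The enumeration, per libvirt's documentation:

    # libvirt virDomainState values; the power.state sample of 1 above
    # matches "running" (the ceilometer mapping is assumed, not confirmed here).
    VIR_DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(VIR_DOMAIN_STATE[1])  # running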
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.352 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.352 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.352 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.352 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.352 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.352 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 18733649639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.353 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 19241219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.353 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.353 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.latency volume: 3212925156 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.353 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.latency volume: 13984579 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.353 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.354 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.354 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.354 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.354 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.354 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.354 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.354 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.355 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.355 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.355 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.355 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.355 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.355 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.355 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.355 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.356 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.356 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.356 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.356 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.356 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.356 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.356 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.356 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.356 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.357 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.357 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.357 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.357 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.357 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.357 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.357 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.357 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.358 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.358 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.358 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.358 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.358 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.358 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.358 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.358 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.359 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.359 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.359 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.359 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.359 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/memory.usage volume: 48.9375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.359 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/memory.usage volume: 49.15234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.360 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.361 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.361 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.361 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.361 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-22T08:30:22.339719) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.361 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.361 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.361 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.362 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.362 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.362 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.362 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.362 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.362 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.362 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-22T08:30:22.341573) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.362 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.362 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.362 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.362 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.362 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.363 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.363 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.363 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.363 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.363 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.363 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.363 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.363 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.363 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-22T08:30:22.342595) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.364 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-22T08:30:22.344215) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-22T08:30:22.345115) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-22T08:30:22.346714) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-22T08:30:22.348231) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-22T08:30:22.350068) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-22T08:30:22.351880) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-22T08:30:22.352877) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-22T08:30:22.354636) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-22T08:30:22.355369) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-22T08:30:22.356326) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.368 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-22T08:30:22.357100) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.368 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-22T08:30:22.358084) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:30:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:30:22.368 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-22T08:30:22.359387) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
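[Editor's note] The ceilometer_agent_compute lines above trace one polling cycle: for each pollster the agent runs discovery, checks whether coordination is needed, records a heartbeat, and converts hypervisor stats into samples (the "<uuid>/<meter> volume: N" debug lines). Below is a minimal sketch of that discover/poll/heartbeat loop; every name in it (Pollster, get_samples, run_cycle) is an illustrative stand-in, not the real ceilometer API.

```python
import datetime


class Pollster:
    """Toy pollster; the real ones read libvirt stats per instance."""

    def __init__(self, name):
        self.name = name
        self.last_heartbeat = None

    def get_samples(self, resources):
        # Stands in for the "<uuid>/<meter> volume: N" debug lines above.
        return [{"resource": r, "meter": self.name, "volume": 0} for r in resources]


def run_cycle(pollsters, discover):
    samples = []
    for pollster in pollsters:
        resources = discover()  # "Executing discovery process for pollsters ..."
        if not resources:
            continue  # "Skip pollster ..., no new resources found this cycle"
        pollster.last_heartbeat = datetime.datetime.utcnow()  # "Pollster heartbeat update"
        samples.extend(pollster.get_samples(resources))
    return samples


if __name__ == "__main__":
    meters = [Pollster("memory.usage"), Pollster("network.incoming.packets")]
    print(run_cycle(meters, lambda: ["78b5db02", "a8349cde"]))
```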
Nov 22 08:30:22 compute-0 nova_compute[189268]: 2025-11-22 08:30:22.770 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:30:22 compute-0 nova_compute[189268]: 2025-11-22 08:30:22.770 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:30:22 compute-0 nova_compute[189268]: 2025-11-22 08:30:22.771 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:30:22 compute-0 nova_compute[189268]: 2025-11-22 08:30:22.771 189273 DEBUG nova.objects.instance [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:30:23 compute-0 podman[242027]: 2025-11-22 08:30:23.146034171 +0000 UTC m=+0.091314533 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:30:23 compute-0 nova_compute[189268]: 2025-11-22 08:30:23.953 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updating instance_info_cache with network_info: [{"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "address": "fa:16:3e:4f:4a:5d", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.53", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4645bc8c-a8", "ovs_interfaceid": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:30:23 compute-0 nova_compute[189268]: 2025-11-22 08:30:23.973 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:30:23 compute-0 nova_compute[189268]: 2025-11-22 08:30:23.973 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
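[Editor's note] The nova_compute lines above show the _heal_instance_info_cache periodic task taking a per-instance named lock, refreshing the network info cache from Neutron, and releasing the lock. A sketch of that acquire/refresh/release pattern follows, with a plain threading.Lock standing in for oslo.concurrency's named locks and refresh_from_neutron as a made-up placeholder, not a real nova call.

```python
import threading

_cache_locks = {}
_network_info_cache = {}


def _lock_for(instance_uuid):
    # One named lock per instance, like "refresh_cache-<uuid>" above.
    return _cache_locks.setdefault(instance_uuid, threading.Lock())


def refresh_network_cache(instance_uuid, refresh_from_neutron):
    with _lock_for(instance_uuid):  # "Acquiring lock" ... "Releasing lock"
        # "Forcefully refreshing network info cache for instance"
        _network_info_cache[instance_uuid] = refresh_from_neutron(instance_uuid)
    return _network_info_cache[instance_uuid]


if __name__ == "__main__":
    info = refresh_network_cache(
        "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66",
        lambda uuid: [{"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "bridge": "br-int"}],
    )
    print(info)
```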
Nov 22 08:30:23 compute-0 nova_compute[189268]: 2025-11-22 08:30:23.974 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:30:23 compute-0 nova_compute[189268]: 2025-11-22 08:30:23.974 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 08:30:23 compute-0 nova_compute[189268]: 2025-11-22 08:30:23.987 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 08:30:24 compute-0 nova_compute[189268]: 2025-11-22 08:30:24.018 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:24 compute-0 nova_compute[189268]: 2025-11-22 08:30:24.112 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:30:26 compute-0 nova_compute[189268]: 2025-11-22 08:30:26.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:30:26 compute-0 nova_compute[189268]: 2025-11-22 08:30:26.697 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:27 compute-0 nova_compute[189268]: 2025-11-22 08:30:27.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:30:28 compute-0 nova_compute[189268]: 2025-11-22 08:30:28.095 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:30:28 compute-0 nova_compute[189268]: 2025-11-22 08:30:28.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:30:28 compute-0 nova_compute[189268]: 2025-11-22 08:30:28.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
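[Editor's note] The oslo_service.periodic_task entries above are interval-driven callbacks, and _reclaim_queued_deletes shows a task short-circuiting on its config (CONF.reclaim_instance_interval <= 0). The following is a simplified stand-in for that decorator-and-runner pattern, not oslo.service's actual API.

```python
import time

_TASKS = []


def periodic_task(interval):
    """Register fn to run at most every `interval` seconds (toy version)."""
    def wrap(fn):
        _TASKS.append({"interval": interval, "fn": fn, "last": 0.0})
        return fn
    return wrap


@periodic_task(interval=10)
def _poll_unconfirmed_resizes():
    print("Running periodic task _poll_unconfirmed_resizes")


@periodic_task(interval=10)
def _reclaim_queued_deletes(reclaim_instance_interval=0):
    if reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")


def run_periodic_tasks(now=None):
    now = time.time() if now is None else now
    for task in _TASKS:
        if now - task["last"] >= task["interval"]:
            task["last"] = now
            task["fn"]()


if __name__ == "__main__":
    run_periodic_tasks()
```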
Nov 22 08:30:29 compute-0 nova_compute[189268]: 2025-11-22 08:30:29.021 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:29 compute-0 nova_compute[189268]: 2025-11-22 08:30:29.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:30:29 compute-0 nova_compute[189268]: 2025-11-22 08:30:29.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 08:30:29 compute-0 podman[203476]: time="2025-11-22T08:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:30:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:30:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
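[Editor's note] The two podman[203476] lines above are libpod REST calls arriving over the podman API socket (the "@" client field marks a Unix-socket peer). A stdlib-only sketch of issuing the same containers/json GET; the socket path /run/podman/podman.sock is the conventional default (it may differ per host) and the call typically needs root.

```python
import http.client
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a Unix socket instead of TCP."""

    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock


conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
print(resp.status, resp.read()[:120])  # 200 plus the start of the JSON array
```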
Nov 22 08:30:30 compute-0 nova_compute[189268]: 2025-11-22 08:30:30.111 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:30:31 compute-0 nova_compute[189268]: 2025-11-22 08:30:31.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:30:31 compute-0 openstack_network_exporter[205661]: ERROR   08:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:30:31 compute-0 openstack_network_exporter[205661]: ERROR   08:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:30:31 compute-0 openstack_network_exporter[205661]: ERROR   08:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:30:31 compute-0 openstack_network_exporter[205661]: ERROR   08:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:30:31 compute-0 openstack_network_exporter[205661]: ERROR   08:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
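[Editor's note] The openstack_network_exporter errors above mean it found no ovs/ovn control sockets to talk to on this node. A quick existence check for those sockets; the glob patterns below are the conventional default locations (<daemon>.<pid>.ctl under each run directory) and are an assumption that may differ on a given host.

```python
import glob

# Conventional control-socket locations for ovsdb-server, ovs-vswitchd and
# ovn-northd; adjust for non-default run directories.
for pattern in (
    "/var/run/openvswitch/ovsdb-server.*.ctl",
    "/var/run/openvswitch/ovs-vswitchd.*.ctl",
    "/var/run/ovn/ovn-northd.*.ctl",
):
    matches = glob.glob(pattern)
    print(pattern, "->", matches or "missing")
```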
Nov 22 08:30:31 compute-0 nova_compute[189268]: 2025-11-22 08:30:31.700 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.127 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.128 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.129 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.130 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.216 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.302 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.303 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.362 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.364 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.426 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.427 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.485 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.494 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.553 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.555 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.625 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.626 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.685 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.686 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:30:33 compute-0 nova_compute[189268]: 2025-11-22 08:30:33.771 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
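[Editor's note] Each instance disk above is probed with qemu-img info, wrapped in oslo_concurrency.prlimit (address-space and CPU caps) and passed --force-share so a live image can be read. The same probe with plain subprocess, keeping the flags and one disk path from the log; the prlimit wrapper is dropped here for brevity.

```python
import json
import os
import subprocess

disk = "/var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk"
result = subprocess.run(
    ["qemu-img", "info", disk, "--force-share", "--output=json"],
    env={**os.environ, "LC_ALL": "C", "LANG": "C"},  # same locale pinning as the log
    capture_output=True, check=True, text=True,
)
info = json.loads(result.stdout)
print(info["format"], info["virtual-size"])  # e.g. qcow2 and the size in bytes
```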
Nov 22 08:30:34 compute-0 nova_compute[189268]: 2025-11-22 08:30:34.024 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:34 compute-0 nova_compute[189268]: 2025-11-22 08:30:34.205 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:30:34 compute-0 nova_compute[189268]: 2025-11-22 08:30:34.207 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5077MB free_disk=72.48502731323242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:30:34 compute-0 nova_compute[189268]: 2025-11-22 08:30:34.207 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:30:34 compute-0 nova_compute[189268]: 2025-11-22 08:30:34.207 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:30:34 compute-0 nova_compute[189268]: 2025-11-22 08:30:34.371 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:30:34 compute-0 nova_compute[189268]: 2025-11-22 08:30:34.372 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance a8349cde-3de3-4359-9fba-8d329cab9476 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:30:34 compute-0 nova_compute[189268]: 2025-11-22 08:30:34.373 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:30:34 compute-0 nova_compute[189268]: 2025-11-22 08:30:34.373 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:30:34 compute-0 nova_compute[189268]: 2025-11-22 08:30:34.425 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing inventories for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 08:30:34 compute-0 nova_compute[189268]: 2025-11-22 08:30:34.481 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating ProviderTree inventory for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 08:30:34 compute-0 nova_compute[189268]: 2025-11-22 08:30:34.482 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating inventory in ProviderTree for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 08:30:34 compute-0 nova_compute[189268]: 2025-11-22 08:30:34.497 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing aggregate associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 08:30:34 compute-0 nova_compute[189268]: 2025-11-22 08:30:34.517 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing trait associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, traits: COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 08:30:34 compute-0 nova_compute[189268]: 2025-11-22 08:30:34.578 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:30:34 compute-0 nova_compute[189268]: 2025-11-22 08:30:34.590 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:30:34 compute-0 nova_compute[189268]: 2025-11-22 08:30:34.593 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:30:34 compute-0 nova_compute[189268]: 2025-11-22 08:30:34.593 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.386s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
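[Editor's note] From the inventory data logged for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, schedulable capacity per resource class is (total - reserved) * allocation_ratio. A worked check of exactly those numbers:

```python
# Inventory as reported in the scheduler.client.report lines above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable units")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2
```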
Nov 22 08:30:35 compute-0 nova_compute[189268]: 2025-11-22 08:30:35.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:30:35 compute-0 podman[242075]: 2025-11-22 08:30:35.162588244 +0000 UTC m=+0.102928657 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 08:30:35 compute-0 podman[242076]: 2025-11-22 08:30:35.171601347 +0000 UTC m=+0.096841952 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:30:35 compute-0 podman[242080]: 2025-11-22 08:30:35.199284534 +0000 UTC m=+0.123973695 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 22 08:30:36 compute-0 nova_compute[189268]: 2025-11-22 08:30:36.701 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:39 compute-0 nova_compute[189268]: 2025-11-22 08:30:39.027 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:41 compute-0 nova_compute[189268]: 2025-11-22 08:30:41.704 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:43 compute-0 podman[242134]: 2025-11-22 08:30:43.122334574 +0000 UTC m=+0.073257477 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Nov 22 08:30:43 compute-0 podman[242135]: 2025-11-22 08:30:43.157256236 +0000 UTC m=+0.103728319 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible)
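[annotation] Each health event embeds the config_data dict that edpm_ansible created the container from. As an illustration of how those keys line up with `podman run` flags, a rough sketch (not edpm_ansible's actual renderer; it only covers the common keys visible in the events above):

    from typing import List

    def podman_args(name: str, cfg: dict) -> List[str]:
        # Rough key-to-flag mapping; the real renderer also handles
        # 'recreate', 'depends_on', healthcheck wiring, and more.
        args = ["podman", "run", "--detach", "--name", name]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        if cfg.get("privileged"):        # True, or the string 'true' above
            args.append("--privileged")
        if cfg.get("restart"):
            args += ["--restart", cfg["restart"]]
        for port in cfg.get("ports", []):
            args += ["--publish", port]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        cmd = cfg.get("command", [])
        return args + (cmd if isinstance(cmd, list) else [cmd])

For example, feeding the ceilometer_agent_ipmi config_data above through podman_args("ceilometer_agent_ipmi", cfg) yields a host-networked, privileged run of the openstack-ceilometer-ipmi image with the listed volumes.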
Nov 22 08:30:44 compute-0 nova_compute[189268]: 2025-11-22 08:30:44.031 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:46 compute-0 podman[242173]: 2025-11-22 08:30:46.134701384 +0000 UTC m=+0.081650613 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, name=ubi9, release-0.7.12=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, config_id=edpm, container_name=kepler, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 22 08:30:46 compute-0 podman[242174]: 2025-11-22 08:30:46.169154383 +0000 UTC m=+0.106868603 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Nov 22 08:30:46 compute-0 nova_compute[189268]: 2025-11-22 08:30:46.708 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:49 compute-0 nova_compute[189268]: 2025-11-22 08:30:49.035 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:51 compute-0 nova_compute[189268]: 2025-11-22 08:30:51.713 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:52 compute-0 podman[242219]: 2025-11-22 08:30:52.124932044 +0000 UTC m=+0.077897449 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 22 08:30:54 compute-0 nova_compute[189268]: 2025-11-22 08:30:54.039 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:54 compute-0 podman[242240]: 2025-11-22 08:30:54.138677241 +0000 UTC m=+0.097801459 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
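[annotation] node_exporter's systemd collector is restricted by the --collector.systemd.unit-include pattern in the config above, so only matching units are exported. A quick check of which unit names pass that filter (the pattern is copied from the flag; the doubled backslash in the logged Python repr is a literal \. in the regex, and re.fullmatch approximates node_exporter's anchored matching):

    import re

    UNIT_INCLUDE = re.compile(
        r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    # Example unit names for illustration only.
    for unit in ["edpm_nova_compute.service", "openvswitch.service",
                 "virtqemud.service", "sshd.service"]:
        print(unit, bool(UNIT_INCLUDE.fullmatch(unit)))
    # Only sshd.service is filtered out.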
Nov 22 08:30:56 compute-0 nova_compute[189268]: 2025-11-22 08:30:56.713 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:59 compute-0 nova_compute[189268]: 2025-11-22 08:30:59.043 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:30:59 compute-0 podman[203476]: time="2025-11-22T08:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:30:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:30:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4793 "" "Go-http-client/1.1"
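[annotation] The two GET lines above are podman's REST service logging requests from the podman exporter, which reaches it over the unix socket configured earlier (CONTAINER_HOST=unix:///run/podman/podman.sock). The same libpod endpoint can be queried with nothing but the stdlib; a sketch:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, path: str):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self) -> None:
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")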
Nov 22 08:31:01 compute-0 openstack_network_exporter[205661]: ERROR   08:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:31:01 compute-0 openstack_network_exporter[205661]: ERROR   08:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:31:01 compute-0 openstack_network_exporter[205661]: ERROR   08:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:31:01 compute-0 openstack_network_exporter[205661]: ERROR   08:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:31:01 compute-0 openstack_network_exporter[205661]: ERROR   08:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
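[annotation] The errors above come from the exporter's appctl wrapper: ovs-appctl and ovn-appctl locate a daemon through a <name>.<pid>.ctl control socket in the daemon's run directory, and ovn-northd is a central-plane daemon, so on a compute node that lookup can never succeed. A sketch of the same discovery step (the run directory is an assumption for this host):

    import glob
    import os
    import subprocess
    from typing import Optional

    # OVS daemons expose a <name>.<pid>.ctl control socket here.
    RUN_DIR = "/var/run/openvswitch"

    def ctl_socket(daemon: str) -> Optional[str]:
        hits = glob.glob(os.path.join(RUN_DIR, f"{daemon}.*.ctl"))
        return hits[0] if hits else None

    sock = ctl_socket("ovs-vswitchd")
    if sock is None:
        print("no control socket files found")  # the condition logged above
    else:
        # Equivalent of `ovs-appctl -t <socket> version`.
        out = subprocess.run(["ovs-appctl", "-t", sock, "version"],
                             capture_output=True, text=True)
        print(out.stdout.strip())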
Nov 22 08:31:01 compute-0 nova_compute[189268]: 2025-11-22 08:31:01.717 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:04 compute-0 nova_compute[189268]: 2025-11-22 08:31:04.046 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:06 compute-0 podman[242264]: 2025-11-22 08:31:06.143135994 +0000 UTC m=+0.084902519 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 08:31:06 compute-0 podman[242266]: 2025-11-22 08:31:06.172131419 +0000 UTC m=+0.091957911 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 08:31:06 compute-0 podman[242265]: 2025-11-22 08:31:06.192497551 +0000 UTC m=+0.123209687 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:31:06 compute-0 nova_compute[189268]: 2025-11-22 08:31:06.720 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:09 compute-0 nova_compute[189268]: 2025-11-22 08:31:09.052 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:31:09.965 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:31:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:31:09.968 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:31:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:31:09.970 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
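[annotation] The three ovn_metadata_agent lines above are oslo.concurrency's standard lock instrumentation: one line when acquisition starts, one with the wait time, one with the hold time on release. A minimal stand-in showing where those numbers come from (illustrative only, not neutron's code):

    import threading
    import time
    from contextlib import contextmanager

    _locks = {}

    @contextmanager
    def timed_lock(name: str):
        lock = _locks.setdefault(name, threading.Lock())
        print(f'Acquiring lock "{name}"')
        t0 = time.monotonic()
        lock.acquire()
        t1 = time.monotonic()
        print(f'Lock "{name}" acquired :: waited {t1 - t0:.3f}s')
        try:
            yield
        finally:
            lock.release()
            held = time.monotonic() - t1
            print(f'Lock "{name}" "released" :: held {held:.3f}s')

    with timed_lock("_check_child_processes"):
        pass  # the monitored critical section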
Nov 22 08:31:11 compute-0 nova_compute[189268]: 2025-11-22 08:31:11.725 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:14 compute-0 nova_compute[189268]: 2025-11-22 08:31:14.058 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:14 compute-0 podman[242325]: 2025-11-22 08:31:14.120830845 +0000 UTC m=+0.070819728 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 08:31:14 compute-0 podman[242324]: 2025-11-22 08:31:14.153288074 +0000 UTC m=+0.103425601 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a)
Nov 22 08:31:16 compute-0 nova_compute[189268]: 2025-11-22 08:31:16.730 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:17 compute-0 podman[242362]: 2025-11-22 08:31:17.158247183 +0000 UTC m=+0.107709766 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, release=1214.1726694543, vendor=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, release-0.7.12=, distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, vcs-type=git)
Nov 22 08:31:17 compute-0 podman[242363]: 2025-11-22 08:31:17.21056311 +0000 UTC m=+0.152358276 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 22 08:31:19 compute-0 nova_compute[189268]: 2025-11-22 08:31:19.060 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:21 compute-0 nova_compute[189268]: 2025-11-22 08:31:21.733 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:23 compute-0 nova_compute[189268]: 2025-11-22 08:31:23.110 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:31:23 compute-0 nova_compute[189268]: 2025-11-22 08:31:23.110 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:31:23 compute-0 podman[242407]: 2025-11-22 08:31:23.180375685 +0000 UTC m=+0.118940971 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, vendor=Red Hat, Inc., distribution-scope=public, release=1755695350, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Nov 22 08:31:23 compute-0 nova_compute[189268]: 2025-11-22 08:31:23.786 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:31:23 compute-0 nova_compute[189268]: 2025-11-22 08:31:23.786 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:31:23 compute-0 nova_compute[189268]: 2025-11-22 08:31:23.787 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:31:24 compute-0 nova_compute[189268]: 2025-11-22 08:31:24.065 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:25 compute-0 nova_compute[189268]: 2025-11-22 08:31:25.072 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Updating instance_info_cache with network_info: [{"id": "c99bd243-1114-4104-8d75-dd481789f958", "address": "fa:16:3e:2a:fd:a4", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc99bd243-11", "ovs_interfaceid": "c99bd243-1114-4104-8d75-dd481789f958", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
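[annotation] The cache update above carries the instance's full network_info as JSON, nested port -> network -> subnets -> ips -> floating_ips. A sketch pulling the addresses out of a blob shaped like the logged one (trimmed here to the relevant keys):

    import json

    blob = '''[{"id": "c99bd243-1114-4104-8d75-dd481789f958",
                "address": "fa:16:3e:2a:fd:a4",
                "network": {"subnets": [{"cidr": "192.168.0.0/24",
                  "ips": [{"address": "192.168.0.99",
                           "floating_ips": [{"address": "192.168.122.200"}]}]}]}}]'''

    for vif in json.loads(blob):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["address"], ip["address"], "->", floats)
    # fa:16:3e:2a:fd:a4 192.168.0.99 -> ['192.168.122.200']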
Nov 22 08:31:25 compute-0 nova_compute[189268]: 2025-11-22 08:31:25.084 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:31:25 compute-0 nova_compute[189268]: 2025-11-22 08:31:25.085 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 08:31:25 compute-0 nova_compute[189268]: 2025-11-22 08:31:25.085 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:31:25 compute-0 podman[242427]: 2025-11-22 08:31:25.166102892 +0000 UTC m=+0.111151550 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:31:26 compute-0 nova_compute[189268]: 2025-11-22 08:31:26.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:31:26 compute-0 nova_compute[189268]: 2025-11-22 08:31:26.737 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:27 compute-0 nova_compute[189268]: 2025-11-22 08:31:27.107 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:31:28 compute-0 nova_compute[189268]: 2025-11-22 08:31:28.097 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:31:28 compute-0 nova_compute[189268]: 2025-11-22 08:31:28.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:31:28 compute-0 nova_compute[189268]: 2025-11-22 08:31:28.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
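[annotation] The run of "Running periodic task ..." lines is oslo.service iterating the decorated methods on the compute manager, and _reclaim_queued_deletes shows the common pattern of bailing out when its config knob is disabled. A compressed sketch of that pattern, using the real oslo_service decorator but an illustrative manager class and option registration (not nova's actual code):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    class Manager(periodic_task.PeriodicTasks):
        # Bare decorator: eligible on every scheduler pass; a spacing=N
        # argument would give the per-task cadence nova uses.
        @periodic_task.periodic_task
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                print("CONF.reclaim_instance_interval <= 0, skipping...")
                return
            # ... find and purge soft-deleted instances ...

    mgr = Manager(CONF)
    mgr.run_periodic_tasks(None)  # one scheduler pass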
Nov 22 08:31:29 compute-0 nova_compute[189268]: 2025-11-22 08:31:29.073 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:29 compute-0 podman[203476]: time="2025-11-22T08:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:31:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:31:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4799 "" "Go-http-client/1.1"
Nov 22 08:31:31 compute-0 nova_compute[189268]: 2025-11-22 08:31:31.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:31:31 compute-0 openstack_network_exporter[205661]: ERROR   08:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:31:31 compute-0 openstack_network_exporter[205661]: ERROR   08:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:31:31 compute-0 openstack_network_exporter[205661]: ERROR   08:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:31:31 compute-0 openstack_network_exporter[205661]: ERROR   08:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:31:31 compute-0 openstack_network_exporter[205661]: ERROR   08:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:31:31 compute-0 nova_compute[189268]: 2025-11-22 08:31:31.738 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:32 compute-0 nova_compute[189268]: 2025-11-22 08:31:32.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:31:33 compute-0 nova_compute[189268]: 2025-11-22 08:31:33.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.082 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.136 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.139 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.140 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.141 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.237 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.318 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.320 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.389 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.396 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.481 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.483 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.577 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.589 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.664 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.674 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.741 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.744 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.818 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.819 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:31:34 compute-0 nova_compute[189268]: 2025-11-22 08:31:34.884 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
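[annotation] Each disk probe above is qemu-img info wrapped in oslo_concurrency.prlimit, which runs the command under address-space and CPU rlimits so a malformed image cannot wedge the agent. The equivalent call reduced to the stdlib, with the limits and paths copied from the log:

    import json
    import os
    import resource
    import subprocess

    def qemu_img_info(path: str) -> dict:
        def limits() -> None:
            # Same caps as `prlimit --as=1073741824 --cpu=30` above:
            # 1 GiB of address space and 30 s of CPU time for the probe.
            resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
            resource.setrlimit(resource.RLIMIT_CPU, (30, 30))

        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            check=True, capture_output=True, text=True, preexec_fn=limits,
            env={**os.environ, "LC_ALL": "C", "LANG": "C"},
        ).stdout
        return json.loads(out)

    info = qemu_img_info(
        "/var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk")
    print(info["format"], info["virtual-size"])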
Nov 22 08:31:35 compute-0 nova_compute[189268]: 2025-11-22 08:31:35.266 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:31:35 compute-0 nova_compute[189268]: 2025-11-22 08:31:35.268 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5069MB free_disk=72.48502349853516GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:31:35 compute-0 nova_compute[189268]: 2025-11-22 08:31:35.269 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:31:35 compute-0 nova_compute[189268]: 2025-11-22 08:31:35.270 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:31:35 compute-0 nova_compute[189268]: 2025-11-22 08:31:35.340 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:31:35 compute-0 nova_compute[189268]: 2025-11-22 08:31:35.341 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance a8349cde-3de3-4359-9fba-8d329cab9476 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:31:35 compute-0 nova_compute[189268]: 2025-11-22 08:31:35.341 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:31:35 compute-0 nova_compute[189268]: 2025-11-22 08:31:35.342 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:31:35 compute-0 nova_compute[189268]: 2025-11-22 08:31:35.410 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:31:35 compute-0 nova_compute[189268]: 2025-11-22 08:31:35.427 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:31:35 compute-0 nova_compute[189268]: 2025-11-22 08:31:35.430 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:31:35 compute-0 nova_compute[189268]: 2025-11-22 08:31:35.431 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:31:36 compute-0 nova_compute[189268]: 2025-11-22 08:31:36.742 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:37 compute-0 podman[242475]: 2025-11-22 08:31:37.169370724 +0000 UTC m=+0.113389121 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 22 08:31:37 compute-0 podman[242477]: 2025-11-22 08:31:37.181229564 +0000 UTC m=+0.107981274 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 22 08:31:37 compute-0 podman[242476]: 2025-11-22 08:31:37.188621594 +0000 UTC m=+0.122189489 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:31:39 compute-0 nova_compute[189268]: 2025-11-22 08:31:39.090 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:41 compute-0 nova_compute[189268]: 2025-11-22 08:31:41.748 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:44 compute-0 nova_compute[189268]: 2025-11-22 08:31:44.099 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:44 compute-0 podman[242537]: 2025-11-22 08:31:44.823018092 +0000 UTC m=+0.093152393 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Nov 22 08:31:44 compute-0 podman[242536]: 2025-11-22 08:31:44.857274169 +0000 UTC m=+0.124042779 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4)
Nov 22 08:31:46 compute-0 nova_compute[189268]: 2025-11-22 08:31:46.751 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:48 compute-0 podman[242573]: 2025-11-22 08:31:48.147315427 +0000 UTC m=+0.097884841 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, config_id=edpm, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0)
Nov 22 08:31:48 compute-0 podman[242574]: 2025-11-22 08:31:48.183052955 +0000 UTC m=+0.127320778 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 08:31:49 compute-0 nova_compute[189268]: 2025-11-22 08:31:49.107 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:51 compute-0 nova_compute[189268]: 2025-11-22 08:31:51.757 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:53 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:31:53.230 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:cf:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'd6:f7:8f:a1:cd:35'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:31:53 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:31:53.231 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 08:31:53 compute-0 nova_compute[189268]: 2025-11-22 08:31:53.236 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:54 compute-0 nova_compute[189268]: 2025-11-22 08:31:54.112 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:54 compute-0 podman[242621]: 2025-11-22 08:31:54.174436362 +0000 UTC m=+0.122454836 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Nov 22 08:31:56 compute-0 podman[242642]: 2025-11-22 08:31:56.112674064 +0000 UTC m=+0.071738483 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:31:56 compute-0 nova_compute[189268]: 2025-11-22 08:31:56.758 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:58 compute-0 nova_compute[189268]: 2025-11-22 08:31:58.700 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "58ce38a0-b758-4032-bb58-56e47d822dbd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:31:58 compute-0 nova_compute[189268]: 2025-11-22 08:31:58.702 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "58ce38a0-b758-4032-bb58-56e47d822dbd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:31:58 compute-0 nova_compute[189268]: 2025-11-22 08:31:58.723 189273 DEBUG nova.compute.manager [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 08:31:58 compute-0 nova_compute[189268]: 2025-11-22 08:31:58.804 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:31:58 compute-0 nova_compute[189268]: 2025-11-22 08:31:58.805 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:31:58 compute-0 nova_compute[189268]: 2025-11-22 08:31:58.815 189273 DEBUG nova.virt.hardware [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 08:31:58 compute-0 nova_compute[189268]: 2025-11-22 08:31:58.816 189273 INFO nova.compute.claims [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Claim successful on node compute-0.ctlplane.example.com
Nov 22 08:31:58 compute-0 nova_compute[189268]: 2025-11-22 08:31:58.966 189273 DEBUG nova.compute.provider_tree [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:31:58 compute-0 nova_compute[189268]: 2025-11-22 08:31:58.980 189273 DEBUG nova.scheduler.client.report [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.002 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.003 189273 DEBUG nova.compute.manager [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.051 189273 DEBUG nova.compute.manager [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.052 189273 DEBUG nova.network.neutron [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.067 189273 INFO nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.102 189273 DEBUG nova.compute.manager [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.116 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.174 189273 DEBUG nova.compute.manager [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.176 189273 DEBUG nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.177 189273 INFO nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Creating image(s)
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.178 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "/var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.178 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.180 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.199 189273 DEBUG oslo_concurrency.processutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.265 189273 DEBUG oslo_concurrency.processutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.275 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "3743d624bf4f49380cb6de0480bbb028361f5cb4" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.275 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "3743d624bf4f49380cb6de0480bbb028361f5cb4" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.286 189273 DEBUG oslo_concurrency.processutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.359 189273 DEBUG oslo_concurrency.processutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.361 189273 DEBUG oslo_concurrency.processutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4,backing_fmt=raw /var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.406 189273 DEBUG oslo_concurrency.processutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4,backing_fmt=raw /var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.407 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "3743d624bf4f49380cb6de0480bbb028361f5cb4" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.408 189273 DEBUG oslo_concurrency.processutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.475 189273 DEBUG oslo_concurrency.processutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.476 189273 DEBUG nova.virt.disk.api [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Checking if we can resize image /var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.477 189273 DEBUG oslo_concurrency.processutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.553 189273 DEBUG oslo_concurrency.processutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.563 189273 DEBUG nova.virt.disk.api [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Cannot resize image /var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.564 189273 DEBUG nova.objects.instance [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lazy-loading 'migration_context' on Instance uuid 58ce38a0-b758-4032-bb58-56e47d822dbd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.577 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "/var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.578 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.579 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.592 189273 DEBUG oslo_concurrency.processutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.669 189273 DEBUG oslo_concurrency.processutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.670 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.671 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.682 189273 DEBUG oslo_concurrency.processutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:31:59 compute-0 podman[203476]: time="2025-11-22T08:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:31:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.760 189273 DEBUG oslo_concurrency.processutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.763 189273 DEBUG oslo_concurrency.processutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:31:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.816 189273 DEBUG oslo_concurrency.processutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk.eph0 1073741824" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.826 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.826 189273 DEBUG oslo_concurrency.processutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.898 189273 DEBUG oslo_concurrency.processutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.900 189273 DEBUG nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.901 189273 DEBUG nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Ensure instance console log exists: /var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.902 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.902 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:31:59 compute-0 nova_compute[189268]: 2025-11-22 08:31:59.903 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.265 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.277 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.297 189273 DEBUG nova.compute.manager [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.368 189273 DEBUG nova.network.neutron [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Successfully updated port: 814f8d81-07a0-4d19-bc9a-0d33f4273c1e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.372 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.372 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.383 189273 DEBUG nova.virt.hardware [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.384 189273 INFO nova.compute.claims [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Claim successful on node compute-0.ctlplane.example.com
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.390 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "refresh_cache-58ce38a0-b758-4032-bb58-56e47d822dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.390 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquired lock "refresh_cache-58ce38a0-b758-4032-bb58-56e47d822dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.391 189273 DEBUG nova.network.neutron [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.501 189273 DEBUG nova.compute.manager [req-1a5f5bba-9ab9-4d1e-b2cd-ffbd33b5a41e req-04bf906c-0095-45d9-b0fb-8c19a61dbf66 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Received event network-changed-814f8d81-07a0-4d19-bc9a-0d33f4273c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.502 189273 DEBUG nova.compute.manager [req-1a5f5bba-9ab9-4d1e-b2cd-ffbd33b5a41e req-04bf906c-0095-45d9-b0fb-8c19a61dbf66 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Refreshing instance network info cache due to event network-changed-814f8d81-07a0-4d19-bc9a-0d33f4273c1e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.503 189273 DEBUG oslo_concurrency.lockutils [req-1a5f5bba-9ab9-4d1e-b2cd-ffbd33b5a41e req-04bf906c-0095-45d9-b0fb-8c19a61dbf66 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-58ce38a0-b758-4032-bb58-56e47d822dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.568 189273 DEBUG nova.compute.provider_tree [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.583 189273 DEBUG nova.network.neutron [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.599 189273 DEBUG nova.scheduler.client.report [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
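The inventory dict above is what capacity math is done against: per resource class, allocatable capacity comes out to roughly (total - reserved) * allocation_ratio. A worked sketch using exactly the numbers from this line:

    # Worked example with the inventory reported above:
    # allocatable = (total - reserved) * allocation_ratio per resource class.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        allocatable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, allocatable)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2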
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.623 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.624 189273 DEBUG nova.compute.manager [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.674 189273 DEBUG nova.compute.manager [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.675 189273 DEBUG nova.network.neutron [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
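"Allocating IP information in the background" means the Neutron allocation proceeds on a separate worker while image and block-device preparation continue on the main path, which is why the subsequent log lines interleave. Nova drives this with eventlet greenthreads; the sketch below is only a standard-library analogy of the overlap, with illustrative stand-in functions:

    # Rough analogy (standard library, not nova's eventlet machinery) for the
    # overlap visible in the log: network allocation runs in the background
    # while disk preparation continues inline.
    from concurrent.futures import ThreadPoolExecutor

    def allocate_for_instance():
        return {"port": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e"}  # illustrative

    def prepare_disks():
        return ["disk", "disk.eph0", "disk.config"]

    with ThreadPoolExecutor(max_workers=1) as pool:
        net_future = pool.submit(allocate_for_instance)   # "in the background"
        disks = prepare_disks()                           # meanwhile, inline
        network_info = net_future.result()                # joined before spawn
    print(disks, network_info)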
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.692 189273 INFO nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.731 189273 DEBUG nova.compute.manager [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.819 189273 DEBUG nova.compute.manager [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.827 189273 DEBUG nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.827 189273 INFO nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Creating image(s)
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.828 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "/var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.829 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.830 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.847 189273 DEBUG oslo_concurrency.processutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.929 189273 DEBUG oslo_concurrency.processutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.932 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "3743d624bf4f49380cb6de0480bbb028361f5cb4" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.933 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "3743d624bf4f49380cb6de0480bbb028361f5cb4" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:00 compute-0 nova_compute[189268]: 2025-11-22 08:32:00.955 189273 DEBUG oslo_concurrency.processutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.029 189273 DEBUG oslo_concurrency.processutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.031 189273 DEBUG oslo_concurrency.processutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4,backing_fmt=raw /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.081 189273 DEBUG oslo_concurrency.processutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4,backing_fmt=raw /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk 1073741824" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
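The processutils commands above are the Qcow2 image backend inspecting the cached base image and then creating a copy-on-write overlay backed by it (nova additionally wraps the info calls in oslo_concurrency.prlimit to cap address space and CPU time; that wrapper is dropped here for brevity). A sketch of the same two calls with plain subprocess, assuming qemu-img is installed; the paths and the 1073741824-byte size are copied from the log:

    # Sketch of the qemu-img flow logged above: inspect the cached base
    # image, then create a qcow2 overlay backed by it.
    import json
    import os
    import subprocess

    env = dict(os.environ, LC_ALL="C", LANG="C")
    base = "/var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4"
    overlay = "/var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk"

    info = json.loads(subprocess.check_output(
        ["qemu-img", "info", base, "--force-share", "--output=json"], env=env))
    print(info["format"], info["virtual-size"])

    subprocess.check_call(
        ["qemu-img", "create", "-f", "qcow2",
         "-o", f"backing_file={base},backing_fmt=raw",
         overlay, "1073741824"], env=env)  # 1073741824 bytes == 1 GiB root disk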
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.093 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "3743d624bf4f49380cb6de0480bbb028361f5cb4" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.094 189273 DEBUG oslo_concurrency.processutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.166 189273 DEBUG oslo_concurrency.processutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.168 189273 DEBUG nova.virt.disk.api [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Checking if we can resize image /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.168 189273 DEBUG oslo_concurrency.processutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.253 189273 DEBUG oslo_concurrency.processutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.256 189273 DEBUG nova.virt.disk.api [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Cannot resize image /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
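The "Cannot resize image ... to a smaller size" line is the outcome of a simple comparison: the requested size (the flavor's 1 GB root disk) is not larger than the overlay's current virtual size, and a qcow2 is never shrunk in place, so the resize is skipped. A minimal sketch of that check, with names chosen for illustration:

    # Minimal sketch of the shrink guard behind the DEBUG line above: the
    # overlay may grow to the requested size but is never shrunk in place.
    def can_resize_image(current_virtual_size, requested_size):
        return requested_size > current_virtual_size

    # Requested size equals the current 1 GiB virtual size, so no resize:
    print(can_resize_image(1073741824, 1073741824))  # False -> "Cannot resize"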
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.257 189273 DEBUG nova.objects.instance [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lazy-loading 'migration_context' on Instance uuid cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.271 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "/var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.272 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.273 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.286 189273 DEBUG oslo_concurrency.processutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.347 189273 DEBUG oslo_concurrency.processutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.356 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.357 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.368 189273 DEBUG oslo_concurrency.processutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.401 189273 DEBUG nova.network.neutron [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Updating instance_info_cache with network_info: [{"id": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e", "address": "fa:16:3e:48:43:35", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.242", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap814f8d81-07", "ovs_interfaceid": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
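The network_info entry above is nova's serialized VIF model: addresses nest under network -> subnets -> ips, with floating IPs hanging off each fixed IP. A sketch walking that structure; the dict literal is abridged from the log entry:

    # Sketch: extracting addresses from the network_info structure above.
    vif = {
        "id": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e",
        "network": {"subnets": [{"ips": [{
            "address": "192.168.0.242", "type": "fixed",
            "floating_ips": [{"address": "192.168.122.172", "type": "floating"}],
        }]}]},
    }
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print("fixed:", ip["address"])
            for fip in ip.get("floating_ips", []):
                print("floating:", fip["address"])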
Nov 22 08:32:01 compute-0 openstack_network_exporter[205661]: ERROR   08:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:32:01 compute-0 openstack_network_exporter[205661]: ERROR   08:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:32:01 compute-0 openstack_network_exporter[205661]: ERROR   08:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:32:01 compute-0 openstack_network_exporter[205661]: ERROR   08:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.422 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Releasing lock "refresh_cache-58ce38a0-b758-4032-bb58-56e47d822dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.423 189273 DEBUG nova.compute.manager [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Instance network_info: |[{"id": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e", "address": "fa:16:3e:48:43:35", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.242", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap814f8d81-07", "ovs_interfaceid": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.424 189273 DEBUG oslo_concurrency.lockutils [req-1a5f5bba-9ab9-4d1e-b2cd-ffbd33b5a41e req-04bf906c-0095-45d9-b0fb-8c19a61dbf66 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-58ce38a0-b758-4032-bb58-56e47d822dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.425 189273 DEBUG nova.network.neutron [req-1a5f5bba-9ab9-4d1e-b2cd-ffbd33b5a41e req-04bf906c-0095-45d9-b0fb-8c19a61dbf66 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Refreshing network info cache for port 814f8d81-07a0-4d19-bc9a-0d33f4273c1e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.431 189273 DEBUG nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Start _get_guest_xml network_info=[{"id": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e", "address": "fa:16:3e:48:43:35", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.242", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap814f8d81-07", "ovs_interfaceid": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-22T08:23:24Z,direct_url=<?>,disk_format='qcow2',id=de9f57cf-28b4-4cbd-b943-19aa098356bf,min_disk=0,min_ram=0,name='cirros',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-22T08:23:25Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}], 'ephemerals': [{'device_name': '/dev/vdb', 'device_type': 'disk', 'size': 1, 'encryption_options': None, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.444 189273 WARNING nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.458 189273 DEBUG nova.virt.libvirt.host [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.459 189273 DEBUG nova.virt.libvirt.host [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.467 189273 DEBUG nova.virt.libvirt.host [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.468 189273 DEBUG nova.virt.libvirt.host [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
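The two probes above first look for a cgroup v1 cpu controller (absent on this host) and then fall back to cgroup v2, where the check amounts to reading the controller list from the unified hierarchy. A sketch of the v2 probe, assuming the standard /sys/fs/cgroup mount point:

    # Sketch of a cgroup v2 "cpu controller present?" probe like the one
    # logged above. Assumes the unified hierarchy at /sys/fs/cgroup.
    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        controllers = Path(root, "cgroup.controllers")
        try:
            return "cpu" in controllers.read_text().split()
        except FileNotFoundError:
            return False  # not a cgroup v2 host

    print(has_cgroupsv2_cpu_controller())  # True on this host per the log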
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.469 189273 DEBUG nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.469 189273 DEBUG nova.virt.hardware [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T08:23:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='796e25a8-f28d-499e-b2fb-dfae32f0eed7',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-22T08:23:24Z,direct_url=<?>,disk_format='qcow2',id=de9f57cf-28b4-4cbd-b943-19aa098356bf,min_disk=0,min_ram=0,name='cirros',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-22T08:23:25Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.470 189273 DEBUG nova.virt.hardware [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.470 189273 DEBUG nova.virt.hardware [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.471 189273 DEBUG nova.virt.hardware [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.471 189273 DEBUG nova.virt.hardware [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.472 189273 DEBUG nova.virt.hardware [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.472 189273 DEBUG nova.virt.hardware [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.473 189273 DEBUG nova.virt.hardware [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.473 189273 DEBUG nova.virt.hardware [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.474 189273 DEBUG nova.virt.hardware [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.474 189273 DEBUG nova.virt.hardware [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
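The hardware.py lines above enumerate every (sockets, cores, threads) triple whose product equals the flavor's vCPU count, within the 65536-per-dimension limits, then sort by preference; for 1 vCPU only 1:1:1 exists, matching "Got 1 possible topologies". A sketch of that enumeration as a divisor walk, with illustrative names:

    # Sketch of the enumeration behind "Build topologies for 1 vcpu(s)":
    # every (sockets, cores, threads) triple with product == vcpus, within
    # the per-dimension limits logged above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        topologies = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            remainder = vcpus // sockets
            for cores in range(1, min(remainder, max_cores) + 1):
                if remainder % cores:
                    continue
                threads = remainder // cores
                if threads <= max_threads:
                    topologies.append((sockets, cores, threads))
        return topologies

    print(possible_topologies(1))  # [(1, 1, 1)] -- the single topology above
    print(possible_topologies(4))  # six triples, e.g. (1, 2, 2), (2, 2, 1), ...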
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.480 189273 DEBUG nova.virt.libvirt.vif [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:31:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-qv6tptr-taee7tfsx64m-77hwefpqvacz-vnf-nuiiyhjth6rc',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-qv6tptr-taee7tfsx64m-77hwefpqvacz-vnf-nuiiyhjth6rc',id=3,image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='209b9e59-811e-4c2b-a756-c29ba92c4b5c'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='80e46844b3824928a6138235e5ede512',ramdisk_id='',reservation_id='r-z9l1lg3b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:31:59Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NjEyNTgwMzE5MTYwOTkzNzkxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU2MTI1ODAzMTkxNjA5OTM3OTE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTYxMjU4MDMxOTE2MDk5Mzc5MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU2MTI1ODAzMTkxNjA5OTM3OTE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NjEyNTgwMzE5MTYwOTkzNzkxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NjEyNTgwMzE5MTYwOTkzNzkxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Nov 22 08:32:01 compute-0 nova_compute[189268]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTYxMjU4MDMxOTE2MDk5Mzc5MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU2MTI1ODAzMTkxNjA5OTM3OTE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NjEyNTgwMzE5MTYwOTkzNzkxPT0tLQo=',user_id='27ed1dd009ad4e29863ab5e3a9826c94',uuid=58ce38a0-b758-4032-bb58-56e47d822dbd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e", "address": "fa:16:3e:48:43:35", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.242", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap814f8d81-07", "ovs_interfaceid": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.481 189273 DEBUG nova.network.os_vif_util [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converting VIF {"id": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e", "address": "fa:16:3e:48:43:35", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.242", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap814f8d81-07", "ovs_interfaceid": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.482 189273 DEBUG nova.network.os_vif_util [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:43:35,bridge_name='br-int',has_traffic_filtering=True,id=814f8d81-07a0-4d19-bc9a-0d33f4273c1e,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap814f8d81-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.483 189273 DEBUG nova.objects.instance [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58ce38a0-b758-4032-bb58-56e47d822dbd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.496 189273 DEBUG nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] End _get_guest_xml xml=<domain type="kvm">
Nov 22 08:32:01 compute-0 nova_compute[189268]:   <uuid>58ce38a0-b758-4032-bb58-56e47d822dbd</uuid>
Nov 22 08:32:01 compute-0 nova_compute[189268]:   <name>instance-00000003</name>
Nov 22 08:32:01 compute-0 nova_compute[189268]:   <memory>524288</memory>
Nov 22 08:32:01 compute-0 nova_compute[189268]:   <vcpu>1</vcpu>
Nov 22 08:32:01 compute-0 nova_compute[189268]:   <metadata>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <nova:name>vn-qv6tptr-taee7tfsx64m-77hwefpqvacz-vnf-nuiiyhjth6rc</nova:name>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <nova:creationTime>2025-11-22 08:32:01</nova:creationTime>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <nova:flavor name="m1.small">
Nov 22 08:32:01 compute-0 nova_compute[189268]:         <nova:memory>512</nova:memory>
Nov 22 08:32:01 compute-0 nova_compute[189268]:         <nova:disk>1</nova:disk>
Nov 22 08:32:01 compute-0 nova_compute[189268]:         <nova:swap>0</nova:swap>
Nov 22 08:32:01 compute-0 nova_compute[189268]:         <nova:ephemeral>1</nova:ephemeral>
Nov 22 08:32:01 compute-0 nova_compute[189268]:         <nova:vcpus>1</nova:vcpus>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       </nova:flavor>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <nova:owner>
Nov 22 08:32:01 compute-0 nova_compute[189268]:         <nova:user uuid="27ed1dd009ad4e29863ab5e3a9826c94">admin</nova:user>
Nov 22 08:32:01 compute-0 nova_compute[189268]:         <nova:project uuid="80e46844b3824928a6138235e5ede512">admin</nova:project>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       </nova:owner>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <nova:root type="image" uuid="de9f57cf-28b4-4cbd-b943-19aa098356bf"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <nova:ports>
Nov 22 08:32:01 compute-0 nova_compute[189268]:         <nova:port uuid="814f8d81-07a0-4d19-bc9a-0d33f4273c1e">
Nov 22 08:32:01 compute-0 nova_compute[189268]:           <nova:ip type="fixed" address="192.168.0.242" ipVersion="4"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:         </nova:port>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       </nova:ports>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     </nova:instance>
Nov 22 08:32:01 compute-0 nova_compute[189268]:   </metadata>
Nov 22 08:32:01 compute-0 nova_compute[189268]:   <sysinfo type="smbios">
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <system>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <entry name="manufacturer">RDO</entry>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <entry name="product">OpenStack Compute</entry>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <entry name="serial">58ce38a0-b758-4032-bb58-56e47d822dbd</entry>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <entry name="uuid">58ce38a0-b758-4032-bb58-56e47d822dbd</entry>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <entry name="family">Virtual Machine</entry>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     </system>
Nov 22 08:32:01 compute-0 nova_compute[189268]:   </sysinfo>
Nov 22 08:32:01 compute-0 nova_compute[189268]:   <os>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <boot dev="hd"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <smbios mode="sysinfo"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:   </os>
Nov 22 08:32:01 compute-0 nova_compute[189268]:   <features>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <acpi/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <apic/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <vmcoreinfo/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:   </features>
Nov 22 08:32:01 compute-0 nova_compute[189268]:   <clock offset="utc">
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <timer name="hpet" present="no"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:   </clock>
Nov 22 08:32:01 compute-0 nova_compute[189268]:   <cpu mode="host-model" match="exact">
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:32:01 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <target dev="vda" bus="virtio"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk.eph0"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <target dev="vdb" bus="virtio"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <disk type="file" device="cdrom">
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <driver name="qemu" type="raw" cache="none"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk.config"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <target dev="sda" bus="sata"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <interface type="ethernet">
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <mac address="fa:16:3e:48:43:35"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <mtu size="1442"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <target dev="tap814f8d81-07"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     </interface>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <serial type="pty">
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <log file="/var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/console.log" append="off"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     </serial>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <video>
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     </video>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <input type="tablet" bus="usb"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <rng model="virtio">
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <backend model="random">/dev/urandom</backend>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <controller type="usb" index="0"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     <memballoon model="virtio">
Nov 22 08:32:01 compute-0 nova_compute[189268]:       <stats period="10"/>
Nov 22 08:32:01 compute-0 nova_compute[189268]:     </memballoon>
Nov 22 08:32:01 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:32:01 compute-0 nova_compute[189268]: </domain>
Nov 22 08:32:01 compute-0 nova_compute[189268]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
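The dump above is the complete libvirt guest definition Nova generated for instance 58ce38a0-b758-4032-bb58-56e47d822dbd: three disks (vda root, vdb ephemeral, sda config-drive cdrom), one virtio NIC on tap814f8d81-07, and a pty serial console logged to console.log. A minimal sketch for pulling those device details out of such a dump with the Python standard library; "domain.xml" is a hypothetical local copy of the XML with the syslog prefixes stripped:

    # parse_domain.py - sketch: list disk sources and NIC targets from a libvirt
    # domain XML dump like the one above ("domain.xml" is an assumed local copy).
    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()
    for disk in root.findall("./devices/disk"):
        target = disk.find("target")
        source = disk.find("source")
        print(disk.get("device"), target.get("dev"),
              source.get("file") if source is not None else "-")
    for iface in root.findall("./devices/interface"):
        print("nic", iface.find("target").get("dev"), iface.find("mac").get("address"))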
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.498 189273 DEBUG nova.compute.manager [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Preparing to wait for external event network-vif-plugged-814f8d81-07a0-4d19-bc9a-0d33f4273c1e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.498 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "58ce38a0-b758-4032-bb58-56e47d822dbd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.498 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "58ce38a0-b758-4032-bb58-56e47d822dbd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.499 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "58ce38a0-b758-4032-bb58-56e47d822dbd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.499 189273 DEBUG nova.virt.libvirt.vif [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:31:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-qv6tptr-taee7tfsx64m-77hwefpqvacz-vnf-nuiiyhjth6rc',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-qv6tptr-taee7tfsx64m-77hwefpqvacz-vnf-nuiiyhjth6rc',id=3,image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='209b9e59-811e-4c2b-a756-c29ba92c4b5c'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='80e46844b3824928a6138235e5ede512',ramdisk_id='',reservation_id='r-z9l1lg3b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:31:59Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NjEyNTgwMzE5MTYwOTkzNzkxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU2MTI1ODAzMTkxNjA5OTM3OTE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTYxMjU4MDMxOTE2MDk5Mzc5MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU2MTI1ODAzMTkxNjA5OTM3OTE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NjEyNTgwMzE5MTYwOTkzNzkxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NjEyNTgwMzE5MTYwOTkzNzkxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Nov 22 08:32:01 compute-0 nova_compute[189268]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTYxMjU4MDMxOTE2MDk5Mzc5MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU2MTI1ODAzMTkxNjA5OTM3OTE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NjEyNTgwMzE5MTYwOTkzNzkxPT0tLQo=',user_id='27ed1dd009ad4e29863ab5e3a9826c94',uuid=58ce38a0-b758-4032-bb58-56e47d822dbd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e", "address": "fa:16:3e:48:43:35", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.242", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap814f8d81-07", "ovs_interfaceid": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.500 189273 DEBUG nova.network.os_vif_util [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converting VIF {"id": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e", "address": "fa:16:3e:48:43:35", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.242", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap814f8d81-07", "ovs_interfaceid": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.500 189273 DEBUG nova.network.os_vif_util [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:43:35,bridge_name='br-int',has_traffic_filtering=True,id=814f8d81-07a0-4d19-bc9a-0d33f4273c1e,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap814f8d81-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.501 189273 DEBUG os_vif [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:43:35,bridge_name='br-int',has_traffic_filtering=True,id=814f8d81-07a0-4d19-bc9a-0d33f4273c1e,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap814f8d81-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.501 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.502 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.502 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.503 189273 DEBUG oslo_concurrency.processutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.504 189273 DEBUG oslo_concurrency.processutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.526 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.527 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap814f8d81-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.528 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap814f8d81-07, col_values=(('external_ids', {'iface-id': '814f8d81-07a0-4d19-bc9a-0d33f4273c1e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:43:35', 'vm-uuid': '58ce38a0-b758-4032-bb58-56e47d822dbd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.530 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:01 compute-0 NetworkManager[56326]: <info>  [1763800321.5316] manager: (tap814f8d81-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.531 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.542 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.543 189273 INFO os_vif [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:43:35,bridge_name='br-int',has_traffic_filtering=True,id=814f8d81-07a0-4d19-bc9a-0d33f4273c1e,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap814f8d81-07')
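The plug sequence above (AddBridgeCommand, then AddPortCommand/DbSetCommand, then "Successfully plugged vif") is os-vif wiring the tap into br-int and stamping it with the Neutron port id via external_ids. The resulting OVS state can be cross-checked from the host with ovs-vsctl; a sketch via subprocess, tap name taken from the log:

    # check_vif.py - sketch: confirm the tap is on br-int with the expected
    # external_ids, mirroring the ovsdbapp transaction logged above.
    import subprocess

    tap = "tap814f8d81-07"
    subprocess.run(["ovs-vsctl", "list-ports", "br-int"], check=True)
    subprocess.run(["ovs-vsctl", "get", "Interface", tap, "external_ids"], check=True)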
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.571 189273 DEBUG oslo_concurrency.processutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 1073741824" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
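disk.eph0 is a qcow2 overlay on the cached base image ephemeral_1_0706d66 (backing_fmt=raw), so the create is near-instant and the base stays shared between instances. The chain can be verified with the same binary; a sketch, paths from the log:

    # check_overlay.py - sketch: verify the new ephemeral overlay points at the
    # shared base image created by the qemu-img call above.
    import subprocess

    disk = "/var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0"
    subprocess.run(["qemu-img", "info", "--backing-chain", "--output=json", disk],
                   check=True)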
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.572 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.573 189273 DEBUG oslo_concurrency.processutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.625 189273 DEBUG nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.626 189273 DEBUG nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.626 189273 DEBUG nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.627 189273 DEBUG nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No VIF found with MAC fa:16:3e:48:43:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 08:32:01 compute-0 rsyslogd[236668]: message too long (8192) with configured size 8096, begin of message is: 2025-11-22 08:32:01.480 189273 DEBUG nova.virt.libvirt.vif [None req-d7066ba0-a6 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
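rsyslogd is truncating the oversized nova_compute VIF debug records at its 8096-byte cap, which is why the long base64/JSON records above arrive split. If complete records matter more than the memory cost, the cap is configurable; a sketch of the legacy directive, assuming it is placed before any module/input statements in /etc/rsyslog.conf (64k is an arbitrary example value):

    # /etc/rsyslog.conf - raise the per-message size cap (error above shows 8096).
    # $MaxMessageSize must appear before modules/inputs are loaded.
    $MaxMessageSize 64k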
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.627 189273 INFO nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Using config drive
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.659 189273 DEBUG oslo_concurrency.processutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.661 189273 DEBUG nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.662 189273 DEBUG nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Ensure instance console log exists: /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.663 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.664 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.665 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:01 compute-0 rsyslogd[236668]: message too long (8192) with configured size 8096, begin of message is: 2025-11-22 08:32:01.499 189273 DEBUG nova.virt.libvirt.vif [None req-d7066ba0-a6 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.759 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.904 189273 DEBUG nova.network.neutron [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Successfully updated port: 3a644b09-361d-48d6-8efe-a180b1177788 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.920 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "refresh_cache-cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.921 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquired lock "refresh_cache-cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.922 189273 DEBUG nova.network.neutron [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 08:32:01 compute-0 nova_compute[189268]: 2025-11-22 08:32:01.989 189273 INFO nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Creating config drive at /var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk.config
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.002 189273 DEBUG oslo_concurrency.processutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq69u5le4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.046 189273 DEBUG nova.network.neutron [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.146 189273 DEBUG oslo_concurrency.processutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq69u5le4" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
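The config drive is a plain ISO 9660 image built with mkisofs (-J Joliet, volume label config-2). It can be inspected on the host without booting the guest; a sketch using isoinfo from the genisoimage tools, path taken from the log:

    # inspect_configdrive.py - sketch: list the files inside the config drive ISO
    # that mkisofs just produced. Assumes isoinfo is installed on the host.
    import subprocess

    iso = "/var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd/disk.config"
    subprocess.run(["isoinfo", "-i", iso, "-J", "-l"], check=True)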
Nov 22 08:32:02 compute-0 NetworkManager[56326]: <info>  [1763800322.2388] manager: (tap814f8d81-07): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Nov 22 08:32:02 compute-0 kernel: tap814f8d81-07: entered promiscuous mode
Nov 22 08:32:02 compute-0 ovn_controller[97783]: 2025-11-22T08:32:02Z|00040|binding|INFO|Claiming lport 814f8d81-07a0-4d19-bc9a-0d33f4273c1e for this chassis.
Nov 22 08:32:02 compute-0 ovn_controller[97783]: 2025-11-22T08:32:02Z|00041|binding|INFO|814f8d81-07a0-4d19-bc9a-0d33f4273c1e: Claiming fa:16:3e:48:43:35 192.168.0.242
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.261 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:02 compute-0 ovn_controller[97783]: 2025-11-22T08:32:02Z|00042|binding|INFO|Setting lport 814f8d81-07a0-4d19-bc9a-0d33f4273c1e ovn-installed in OVS
Nov 22 08:32:02 compute-0 ovn_controller[97783]: 2025-11-22T08:32:02Z|00043|binding|INFO|Setting lport 814f8d81-07a0-4d19-bc9a-0d33f4273c1e up in Southbound
Nov 22 08:32:02 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:02.266 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:43:35 192.168.0.242'], port_security=['fa:16:3e:48:43:35 192.168.0.242'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-eigzbqv6tptr-taee7tfsx64m-77hwefpqvacz-port-diran7zxat5l', 'neutron:cidrs': '192.168.0.242/24', 'neutron:device_id': '58ce38a0-b758-4032-bb58-56e47d822dbd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-eigzbqv6tptr-taee7tfsx64m-77hwefpqvacz-port-diran7zxat5l', 'neutron:project_id': '80e46844b3824928a6138235e5ede512', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d35d3a2-03b3-4b0d-a4c4-f066616bbaa8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a46a1c4a-0f65-4313-a2a5-5e5bba4e3fd3, chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=814f8d81-07a0-4d19-bc9a-0d33f4273c1e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:32:02 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:02.268 106642 INFO neutron.agent.ovn.metadata.agent [-] Port 814f8d81-07a0-4d19-bc9a-0d33f4273c1e in datapath 02517cc7-8060-4764-b9b0-b1d7f59e3ae8 bound to our chassis
Nov 22 08:32:02 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:02.270 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02517cc7-8060-4764-b9b0-b1d7f59e3ae8
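ovn-controller has claimed the lport for this chassis and the metadata agent is provisioning the network's namespace. The binding can be cross-checked in the OVN southbound DB; a sketch, logical port id from the log (assumes ovn-sbctl on this host can reach the SB database):

    # check_binding.py - sketch: show which chassis the logical port is bound to,
    # matching the "Claiming lport" messages above.
    import subprocess

    lport = "814f8d81-07a0-4d19-bc9a-0d33f4273c1e"
    subprocess.run(["ovn-sbctl", "find", "Port_Binding", f"logical_port={lport}"],
                   check=True)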
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.275 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:02 compute-0 systemd-machined[155703]: New machine qemu-3-instance-00000003.
Nov 22 08:32:02 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:02.299 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[cbad54ab-1ae4-4af4-9f56-b763be0fedbe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:32:02 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
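systemd-machined has registered the qemu scope, so the guest now runs as libvirt domain instance-00000003. A sketch for confirming its state from the host (the libvirt-python bindings would work equally well):

    # check_domain.py - sketch: query libvirt for the state of the new domain.
    import subprocess

    subprocess.run(["virsh", "-c", "qemu:///system", "domstate", "instance-00000003"],
                   check=True)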
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.324 189273 DEBUG nova.compute.manager [req-11963514-84f7-4a23-9fbb-984d8254b1fb req-63879de1-d592-43d8-a7e0-61f6b2be1ee4 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Received event network-changed-3a644b09-361d-48d6-8efe-a180b1177788 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.326 189273 DEBUG nova.compute.manager [req-11963514-84f7-4a23-9fbb-984d8254b1fb req-63879de1-d592-43d8-a7e0-61f6b2be1ee4 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Refreshing instance network info cache due to event network-changed-3a644b09-361d-48d6-8efe-a180b1177788. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.327 189273 DEBUG oslo_concurrency.lockutils [req-11963514-84f7-4a23-9fbb-984d8254b1fb req-63879de1-d592-43d8-a7e0-61f6b2be1ee4 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:32:02 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:02.341 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[4721658d-4e9a-463f-9160-53b4afab3040]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:32:02 compute-0 systemd-udevd[242744]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:32:02 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:02.345 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[928baf60-79f4-4364-94bb-3388cf86c8a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:32:02 compute-0 NetworkManager[56326]: <info>  [1763800322.3641] device (tap814f8d81-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 08:32:02 compute-0 NetworkManager[56326]: <info>  [1763800322.3651] device (tap814f8d81-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 08:32:02 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:02.382 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1adf4a-c99f-4c43-b856-61fa9c9825e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:32:02 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:02.438 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[9e792132-3fb9-471f-b240-c66af1e71477]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02517cc7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:86:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501085, 'reachable_time': 25253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242754, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:32:02 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:02.466 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[71af2dea-657e-41b9-a8ed-ae8096634b7f]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap02517cc7-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501097, 'tstamp': 501097}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242756, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap02517cc7-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501100, 'tstamp': 501100}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242756, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
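The two RTM_NEWADDR replies above show the metadata agent assigning 192.168.0.2/24 and the link-local 169.254.169.254/32 to tap02517cc7-81 inside the per-network namespace, which is how the guest's metadata requests get answered locally. A sketch for confirming it with iproute2, namespace and device names taken from the log:

    # check_metadata_ns.py - sketch: show the addresses just configured inside the
    # ovnmeta namespace for network 02517cc7.
    import subprocess

    ns = "ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8"
    subprocess.run(["ip", "netns", "exec", ns,
                    "ip", "addr", "show", "dev", "tap02517cc7-81"], check=True)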
Nov 22 08:32:02 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:02.469 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02517cc7-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.473 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.475 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:02 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:02.476 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02517cc7-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:32:02 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:02.477 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:32:02 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:02.478 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02517cc7-80, col_values=(('external_ids', {'iface-id': '5e2a8859-83a6-4000-bcad-5571f3c7bd5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:32:02 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:02.478 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.697 189273 DEBUG nova.network.neutron [req-1a5f5bba-9ab9-4d1e-b2cd-ffbd33b5a41e req-04bf906c-0095-45d9-b0fb-8c19a61dbf66 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Updated VIF entry in instance network info cache for port 814f8d81-07a0-4d19-bc9a-0d33f4273c1e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.699 189273 DEBUG nova.network.neutron [req-1a5f5bba-9ab9-4d1e-b2cd-ffbd33b5a41e req-04bf906c-0095-45d9-b0fb-8c19a61dbf66 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Updating instance_info_cache with network_info: [{"id": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e", "address": "fa:16:3e:48:43:35", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.242", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap814f8d81-07", "ovs_interfaceid": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.712 189273 DEBUG oslo_concurrency.lockutils [req-1a5f5bba-9ab9-4d1e-b2cd-ffbd33b5a41e req-04bf906c-0095-45d9-b0fb-8c19a61dbf66 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-58ce38a0-b758-4032-bb58-56e47d822dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.728 189273 DEBUG nova.network.neutron [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Updating instance_info_cache with network_info: [{"id": "3a644b09-361d-48d6-8efe-a180b1177788", "address": "fa:16:3e:7d:9f:dc", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a644b09-36", "ovs_interfaceid": "3a644b09-361d-48d6-8efe-a180b1177788", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.747 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Releasing lock "refresh_cache-cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.748 189273 DEBUG nova.compute.manager [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Instance network_info: |[{"id": "3a644b09-361d-48d6-8efe-a180b1177788", "address": "fa:16:3e:7d:9f:dc", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a644b09-36", "ovs_interfaceid": "3a644b09-361d-48d6-8efe-a180b1177788", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.749 189273 DEBUG oslo_concurrency.lockutils [req-11963514-84f7-4a23-9fbb-984d8254b1fb req-63879de1-d592-43d8-a7e0-61f6b2be1ee4 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.750 189273 DEBUG nova.network.neutron [req-11963514-84f7-4a23-9fbb-984d8254b1fb req-63879de1-d592-43d8-a7e0-61f6b2be1ee4 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Refreshing network info cache for port 3a644b09-361d-48d6-8efe-a180b1177788 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.757 189273 DEBUG nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Start _get_guest_xml network_info=[{"id": "3a644b09-361d-48d6-8efe-a180b1177788", "address": "fa:16:3e:7d:9f:dc", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a644b09-36", "ovs_interfaceid": "3a644b09-361d-48d6-8efe-a180b1177788", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-22T08:23:24Z,direct_url=<?>,disk_format='qcow2',id=de9f57cf-28b4-4cbd-b943-19aa098356bf,min_disk=0,min_ram=0,name='cirros',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-22T08:23:25Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}], 'ephemerals': [{'device_name': '/dev/vdb', 'device_type': 'disk', 'size': 1, 'encryption_options': None, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.783 189273 WARNING nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.792 189273 DEBUG nova.virt.libvirt.host [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.794 189273 DEBUG nova.virt.libvirt.host [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.802 189273 DEBUG nova.virt.libvirt.host [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.803 189273 DEBUG nova.virt.libvirt.host [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.804 189273 DEBUG nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.804 189273 DEBUG nova.virt.hardware [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T08:23:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='796e25a8-f28d-499e-b2fb-dfae32f0eed7',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-22T08:23:24Z,direct_url=<?>,disk_format='qcow2',id=de9f57cf-28b4-4cbd-b943-19aa098356bf,min_disk=0,min_ram=0,name='cirros',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-22T08:23:25Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.805 189273 DEBUG nova.virt.hardware [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.806 189273 DEBUG nova.virt.hardware [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.806 189273 DEBUG nova.virt.hardware [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.807 189273 DEBUG nova.virt.hardware [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.807 189273 DEBUG nova.virt.hardware [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.808 189273 DEBUG nova.virt.hardware [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.809 189273 DEBUG nova.virt.hardware [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.809 189273 DEBUG nova.virt.hardware [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.810 189273 DEBUG nova.virt.hardware [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.811 189273 DEBUG nova.virt.hardware [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.817 189273 DEBUG nova.virt.libvirt.vif [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:31:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-qv6tptr-hea4zpteaolv-dnc7x4xkssdg-vnf-savd4bbetntp',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-qv6tptr-hea4zpteaolv-dnc7x4xkssdg-vnf-savd4bbetntp',id=4,image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='209b9e59-811e-4c2b-a756-c29ba92c4b5c'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='80e46844b3824928a6138235e5ede512',ramdisk_id='',reservation_id='r-ju3bsu4u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:32:00Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT00MTU3OTE5NzIxMjIxNTM1OTU4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTQxNTc5MTk3MjEyMjE1MzU5NTg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NDE1NzkxOTcyMTIyMTUzNTk1OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTQxNTc5MTk3MjEyMjE1MzU5NTg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT00MTU3OTE5NzIxMjIxNTM1OTU4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT00MTU3OTE5NzIxMjIxNTM1OTU4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NDE1NzkxOTcyMTIyMTUzNTk1OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTQxNTc5MTk3MjEyMjE1MzU5NTg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT00MTU3OTE5NzIxMjIxNTM1OTU4PT0tLQo=',user_id='27ed1dd009ad4e29863ab5e3a9826c94',uuid=cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3a644b09-361d-48d6-8efe-a180b1177788", "address": "fa:16:3e:7d:9f:dc", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a644b09-36", "ovs_interfaceid": "3a644b09-361d-48d6-8efe-a180b1177788", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.817 189273 DEBUG nova.network.os_vif_util [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converting VIF {"id": "3a644b09-361d-48d6-8efe-a180b1177788", "address": "fa:16:3e:7d:9f:dc", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a644b09-36", "ovs_interfaceid": "3a644b09-361d-48d6-8efe-a180b1177788", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.819 189273 DEBUG nova.network.os_vif_util [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:9f:dc,bridge_name='br-int',has_traffic_filtering=True,id=3a644b09-361d-48d6-8efe-a180b1177788,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3a644b09-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.821 189273 DEBUG nova.objects.instance [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lazy-loading 'pci_devices' on Instance uuid cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.835 189273 DEBUG nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] End _get_guest_xml xml=<domain type="kvm">
Nov 22 08:32:02 compute-0 nova_compute[189268]:   <uuid>cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435</uuid>
Nov 22 08:32:02 compute-0 nova_compute[189268]:   <name>instance-00000004</name>
Nov 22 08:32:02 compute-0 nova_compute[189268]:   <memory>524288</memory>
Nov 22 08:32:02 compute-0 nova_compute[189268]:   <vcpu>1</vcpu>
Nov 22 08:32:02 compute-0 nova_compute[189268]:   <metadata>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <nova:name>vn-qv6tptr-hea4zpteaolv-dnc7x4xkssdg-vnf-savd4bbetntp</nova:name>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <nova:creationTime>2025-11-22 08:32:02</nova:creationTime>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <nova:flavor name="m1.small">
Nov 22 08:32:02 compute-0 nova_compute[189268]:         <nova:memory>512</nova:memory>
Nov 22 08:32:02 compute-0 nova_compute[189268]:         <nova:disk>1</nova:disk>
Nov 22 08:32:02 compute-0 nova_compute[189268]:         <nova:swap>0</nova:swap>
Nov 22 08:32:02 compute-0 nova_compute[189268]:         <nova:ephemeral>1</nova:ephemeral>
Nov 22 08:32:02 compute-0 nova_compute[189268]:         <nova:vcpus>1</nova:vcpus>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       </nova:flavor>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <nova:owner>
Nov 22 08:32:02 compute-0 nova_compute[189268]:         <nova:user uuid="27ed1dd009ad4e29863ab5e3a9826c94">admin</nova:user>
Nov 22 08:32:02 compute-0 nova_compute[189268]:         <nova:project uuid="80e46844b3824928a6138235e5ede512">admin</nova:project>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       </nova:owner>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <nova:root type="image" uuid="de9f57cf-28b4-4cbd-b943-19aa098356bf"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <nova:ports>
Nov 22 08:32:02 compute-0 nova_compute[189268]:         <nova:port uuid="3a644b09-361d-48d6-8efe-a180b1177788">
Nov 22 08:32:02 compute-0 nova_compute[189268]:           <nova:ip type="fixed" address="192.168.0.192" ipVersion="4"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:         </nova:port>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       </nova:ports>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     </nova:instance>
Nov 22 08:32:02 compute-0 nova_compute[189268]:   </metadata>
Nov 22 08:32:02 compute-0 nova_compute[189268]:   <sysinfo type="smbios">
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <system>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <entry name="manufacturer">RDO</entry>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <entry name="product">OpenStack Compute</entry>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <entry name="serial">cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435</entry>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <entry name="uuid">cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435</entry>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <entry name="family">Virtual Machine</entry>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     </system>
Nov 22 08:32:02 compute-0 nova_compute[189268]:   </sysinfo>
Nov 22 08:32:02 compute-0 nova_compute[189268]:   <os>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <boot dev="hd"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <smbios mode="sysinfo"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:   </os>
Nov 22 08:32:02 compute-0 nova_compute[189268]:   <features>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <acpi/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <apic/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <vmcoreinfo/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:   </features>
Nov 22 08:32:02 compute-0 nova_compute[189268]:   <clock offset="utc">
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <timer name="hpet" present="no"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:   </clock>
Nov 22 08:32:02 compute-0 nova_compute[189268]:   <cpu mode="host-model" match="exact">
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:32:02 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <target dev="vda" bus="virtio"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <target dev="vdb" bus="virtio"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <disk type="file" device="cdrom">
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <driver name="qemu" type="raw" cache="none"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.config"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <target dev="sda" bus="sata"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <interface type="ethernet">
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <mac address="fa:16:3e:7d:9f:dc"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <mtu size="1442"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <target dev="tap3a644b09-36"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     </interface>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <serial type="pty">
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <log file="/var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/console.log" append="off"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     </serial>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <video>
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     </video>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <input type="tablet" bus="usb"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <rng model="virtio">
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <backend model="random">/dev/urandom</backend>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <controller type="usb" index="0"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     <memballoon model="virtio">
Nov 22 08:32:02 compute-0 nova_compute[189268]:       <stats period="10"/>
Nov 22 08:32:02 compute-0 nova_compute[189268]:     </memballoon>
Nov 22 08:32:02 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:32:02 compute-0 nova_compute[189268]: </domain>
Nov 22 08:32:02 compute-0 nova_compute[189268]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.837 189273 DEBUG nova.compute.manager [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Preparing to wait for external event network-vif-plugged-3a644b09-361d-48d6-8efe-a180b1177788 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.838 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.838 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.839 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.840 189273 DEBUG nova.virt.libvirt.vif [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:31:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-qv6tptr-hea4zpteaolv-dnc7x4xkssdg-vnf-savd4bbetntp',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-qv6tptr-hea4zpteaolv-dnc7x4xkssdg-vnf-savd4bbetntp',id=4,image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='209b9e59-811e-4c2b-a756-c29ba92c4b5c'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='80e46844b3824928a6138235e5ede512',ramdisk_id='',reservation_id='r-ju3bsu4u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:32:00Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT00MTU3OTE5NzIxMjIxNTM1OTU4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTQxNTc5MTk3MjEyMjE1MzU5NTg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NDE1NzkxOTcyMTIyMTUzNTk1OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTQxNTc5MTk3MjEyMjE1MzU5NTg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT00MTU3OTE5NzIxMjIxNTM1OTU4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT00MTU3OTE5NzIxMjIxNTM1OTU4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NDE1NzkxOTcyMTIyMTUzNTk1OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTQxNTc5MTk3MjEyMjE1MzU5NTg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT00MTU3OTE5NzIxMjIxNTM1OTU4PT0tLQo=',user_id='27ed1dd009ad4e29863ab5e3a9826c94',uuid=cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3a644b09-361d-48d6-8efe-a180b1177788", "address": "fa:16:3e:7d:9f:dc", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a644b09-36", "ovs_interfaceid": "3a644b09-361d-48d6-8efe-a180b1177788", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.841 189273 DEBUG nova.network.os_vif_util [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converting VIF {"id": "3a644b09-361d-48d6-8efe-a180b1177788", "address": "fa:16:3e:7d:9f:dc", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a644b09-36", "ovs_interfaceid": "3a644b09-361d-48d6-8efe-a180b1177788", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.842 189273 DEBUG nova.network.os_vif_util [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:9f:dc,bridge_name='br-int',has_traffic_filtering=True,id=3a644b09-361d-48d6-8efe-a180b1177788,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3a644b09-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.842 189273 DEBUG os_vif [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:9f:dc,bridge_name='br-int',has_traffic_filtering=True,id=3a644b09-361d-48d6-8efe-a180b1177788,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3a644b09-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.843 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.844 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.845 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.850 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.851 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a644b09-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.852 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3a644b09-36, col_values=(('external_ids', {'iface-id': '3a644b09-361d-48d6-8efe-a180b1177788', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:9f:dc', 'vm-uuid': 'cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.854 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:02 compute-0 NetworkManager[56326]: <info>  [1763800322.8563] manager: (tap3a644b09-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.858 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.868 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.869 189273 INFO os_vif [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:9f:dc,bridge_name='br-int',has_traffic_filtering=True,id=3a644b09-361d-48d6-8efe-a180b1177788,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3a644b09-36')
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.924 189273 DEBUG nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.925 189273 DEBUG nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.925 189273 DEBUG nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.926 189273 DEBUG nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No VIF found with MAC fa:16:3e:7d:9f:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 08:32:02 compute-0 nova_compute[189268]: 2025-11-22 08:32:02.926 189273 INFO nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Using config drive
Nov 22 08:32:03 compute-0 rsyslogd[236668]: message too long (8192) with configured size 8096, begin of message is: 2025-11-22 08:32:02.817 189273 DEBUG nova.virt.libvirt.vif [None req-eaaf2e3e-ec [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 22 08:32:03 compute-0 rsyslogd[236668]: message too long (8192) with configured size 8096, begin of message is: 2025-11-22 08:32:02.840 189273 DEBUG nova.virt.libvirt.vif [None req-eaaf2e3e-ec [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 22 08:32:03 compute-0 nova_compute[189268]: 2025-11-22 08:32:03.137 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763800323.1355212, 58ce38a0-b758-4032-bb58-56e47d822dbd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:32:03 compute-0 nova_compute[189268]: 2025-11-22 08:32:03.138 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] VM Started (Lifecycle Event)
Nov 22 08:32:03 compute-0 nova_compute[189268]: 2025-11-22 08:32:03.156 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:32:03 compute-0 nova_compute[189268]: 2025-11-22 08:32:03.164 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763800323.1362875, 58ce38a0-b758-4032-bb58-56e47d822dbd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:32:03 compute-0 nova_compute[189268]: 2025-11-22 08:32:03.165 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] VM Paused (Lifecycle Event)
Nov 22 08:32:03 compute-0 nova_compute[189268]: 2025-11-22 08:32:03.183 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:32:03 compute-0 nova_compute[189268]: 2025-11-22 08:32:03.189 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:32:03 compute-0 nova_compute[189268]: 2025-11-22 08:32:03.202 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:32:03 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:03.235 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:32:04 compute-0 nova_compute[189268]: 2025-11-22 08:32:04.919 189273 INFO nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Creating config drive at /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.config
Nov 22 08:32:04 compute-0 nova_compute[189268]: 2025-11-22 08:32:04.934 189273 DEBUG oslo_concurrency.processutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyldqqw04 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.036 189273 DEBUG nova.compute.manager [req-bdf55190-4fc0-4639-ae00-29fac4e03860 req-39ed7378-c1f2-4c07-bee2-7f1a33959e8d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Received event network-vif-plugged-814f8d81-07a0-4d19-bc9a-0d33f4273c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.039 189273 DEBUG oslo_concurrency.lockutils [req-bdf55190-4fc0-4639-ae00-29fac4e03860 req-39ed7378-c1f2-4c07-bee2-7f1a33959e8d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "58ce38a0-b758-4032-bb58-56e47d822dbd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.040 189273 DEBUG oslo_concurrency.lockutils [req-bdf55190-4fc0-4639-ae00-29fac4e03860 req-39ed7378-c1f2-4c07-bee2-7f1a33959e8d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "58ce38a0-b758-4032-bb58-56e47d822dbd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.041 189273 DEBUG oslo_concurrency.lockutils [req-bdf55190-4fc0-4639-ae00-29fac4e03860 req-39ed7378-c1f2-4c07-bee2-7f1a33959e8d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "58ce38a0-b758-4032-bb58-56e47d822dbd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.041 189273 DEBUG nova.compute.manager [req-bdf55190-4fc0-4639-ae00-29fac4e03860 req-39ed7378-c1f2-4c07-bee2-7f1a33959e8d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Processing event network-vif-plugged-814f8d81-07a0-4d19-bc9a-0d33f4273c1e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.042 189273 DEBUG nova.compute.manager [req-bdf55190-4fc0-4639-ae00-29fac4e03860 req-39ed7378-c1f2-4c07-bee2-7f1a33959e8d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Received event network-vif-plugged-814f8d81-07a0-4d19-bc9a-0d33f4273c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.043 189273 DEBUG oslo_concurrency.lockutils [req-bdf55190-4fc0-4639-ae00-29fac4e03860 req-39ed7378-c1f2-4c07-bee2-7f1a33959e8d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "58ce38a0-b758-4032-bb58-56e47d822dbd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.044 189273 DEBUG oslo_concurrency.lockutils [req-bdf55190-4fc0-4639-ae00-29fac4e03860 req-39ed7378-c1f2-4c07-bee2-7f1a33959e8d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "58ce38a0-b758-4032-bb58-56e47d822dbd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.045 189273 DEBUG oslo_concurrency.lockutils [req-bdf55190-4fc0-4639-ae00-29fac4e03860 req-39ed7378-c1f2-4c07-bee2-7f1a33959e8d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "58ce38a0-b758-4032-bb58-56e47d822dbd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.046 189273 DEBUG nova.compute.manager [req-bdf55190-4fc0-4639-ae00-29fac4e03860 req-39ed7378-c1f2-4c07-bee2-7f1a33959e8d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] No waiting events found dispatching network-vif-plugged-814f8d81-07a0-4d19-bc9a-0d33f4273c1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.047 189273 WARNING nova.compute.manager [req-bdf55190-4fc0-4639-ae00-29fac4e03860 req-39ed7378-c1f2-4c07-bee2-7f1a33959e8d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Received unexpected event network-vif-plugged-814f8d81-07a0-4d19-bc9a-0d33f4273c1e for instance with vm_state building and task_state spawning.
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.049 189273 DEBUG nova.compute.manager [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.056 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763800325.0552466, 58ce38a0-b758-4032-bb58-56e47d822dbd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.057 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] VM Resumed (Lifecycle Event)
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.063 189273 DEBUG nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.072 189273 INFO nova.virt.libvirt.driver [-] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Instance spawned successfully.
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.073 189273 DEBUG nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.077 189273 DEBUG oslo_concurrency.processutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyldqqw04" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.100 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.120 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.129 189273 DEBUG nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.130 189273 DEBUG nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.131 189273 DEBUG nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.131 189273 DEBUG nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.132 189273 DEBUG nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.132 189273 DEBUG nova.virt.libvirt.driver [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.170 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:32:05 compute-0 NetworkManager[56326]: <info>  [1763800325.1886] manager: (tap3a644b09-36): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Nov 22 08:32:05 compute-0 kernel: tap3a644b09-36: entered promiscuous mode
Nov 22 08:32:05 compute-0 ovn_controller[97783]: 2025-11-22T08:32:05Z|00044|binding|INFO|Claiming lport 3a644b09-361d-48d6-8efe-a180b1177788 for this chassis.
Nov 22 08:32:05 compute-0 ovn_controller[97783]: 2025-11-22T08:32:05Z|00045|binding|INFO|3a644b09-361d-48d6-8efe-a180b1177788: Claiming fa:16:3e:7d:9f:dc 192.168.0.192
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.200 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:05 compute-0 NetworkManager[56326]: <info>  [1763800325.2092] device (tap3a644b09-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 08:32:05 compute-0 NetworkManager[56326]: <info>  [1763800325.2100] device (tap3a644b09-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.215 189273 INFO nova.compute.manager [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Took 6.04 seconds to spawn the instance on the hypervisor.
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.216 189273 DEBUG nova.compute.manager [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:32:05 compute-0 ovn_controller[97783]: 2025-11-22T08:32:05Z|00046|binding|INFO|Setting lport 3a644b09-361d-48d6-8efe-a180b1177788 ovn-installed in OVS
Nov 22 08:32:05 compute-0 ovn_controller[97783]: 2025-11-22T08:32:05Z|00047|binding|INFO|Setting lport 3a644b09-361d-48d6-8efe-a180b1177788 up in Southbound
Nov 22 08:32:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:05.224 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:9f:dc 192.168.0.192'], port_security=['fa:16:3e:7d:9f:dc 192.168.0.192'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-eigzbqv6tptr-hea4zpteaolv-dnc7x4xkssdg-port-wswwvb7qczwb', 'neutron:cidrs': '192.168.0.192/24', 'neutron:device_id': 'cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-eigzbqv6tptr-hea4zpteaolv-dnc7x4xkssdg-port-wswwvb7qczwb', 'neutron:project_id': '80e46844b3824928a6138235e5ede512', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d35d3a2-03b3-4b0d-a4c4-f066616bbaa8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a46a1c4a-0f65-4313-a2a5-5e5bba4e3fd3, chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=3a644b09-361d-48d6-8efe-a180b1177788) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:32:05 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 22 08:32:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:05.228 106642 INFO neutron.agent.ovn.metadata.agent [-] Port 3a644b09-361d-48d6-8efe-a180b1177788 in datapath 02517cc7-8060-4764-b9b0-b1d7f59e3ae8 bound to our chassis
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.226 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:05.231 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02517cc7-8060-4764-b9b0-b1d7f59e3ae8
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.232 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:05.254 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[b537d8c6-41ea-4eb0-82a7-8405179c04c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:32:05 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 22 08:32:05 compute-0 systemd-machined[155703]: New machine qemu-4-instance-00000004.
Nov 22 08:32:05 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.296 189273 INFO nova.compute.manager [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Took 6.52 seconds to build instance.
Nov 22 08:32:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:05.302 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[eca9af52-e72c-4bbe-aace-d523daf81294]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:32:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:05.306 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[60ce1a7a-de0f-4874-9c46-9e63800380bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.324 189273 DEBUG oslo_concurrency.lockutils [None req-d7066ba0-a603-46d3-9849-0eabcb081d6d 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "58ce38a0-b758-4032-bb58-56e47d822dbd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:05.353 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[b276c2fb-8ab0-4f28-8af1-99ebcc49982f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:32:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:05.378 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[06d1a79b-0b97-42dc-9d8b-c5eb2ae3972e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02517cc7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:86:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501085, 'reachable_time': 25253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242818, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:32:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:05.402 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[7c9b54ad-a2e4-47ca-bf15-92a5f6b90f31]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap02517cc7-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501097, 'tstamp': 501097}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242821, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap02517cc7-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501100, 'tstamp': 501100}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242821, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:32:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:05.405 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02517cc7-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.407 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.409 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:05.410 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02517cc7-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:32:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:05.410 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:32:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:05.411 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02517cc7-80, col_values=(('external_ids', {'iface-id': '5e2a8859-83a6-4000-bcad-5571f3c7bd5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:32:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:05.412 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.850 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763800325.849937, cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.850 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] VM Started (Lifecycle Event)
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.872 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.878 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763800325.8501024, cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.878 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] VM Paused (Lifecycle Event)
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.894 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.899 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:32:05 compute-0 nova_compute[189268]: 2025-11-22 08:32:05.923 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.600 189273 DEBUG nova.compute.manager [req-a4125986-e8f9-402e-9bfa-e14a53132290 req-65b1dd27-72e9-439e-81c0-f0521b361d18 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Received event network-vif-plugged-3a644b09-361d-48d6-8efe-a180b1177788 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.600 189273 DEBUG oslo_concurrency.lockutils [req-a4125986-e8f9-402e-9bfa-e14a53132290 req-65b1dd27-72e9-439e-81c0-f0521b361d18 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.600 189273 DEBUG oslo_concurrency.lockutils [req-a4125986-e8f9-402e-9bfa-e14a53132290 req-65b1dd27-72e9-439e-81c0-f0521b361d18 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.601 189273 DEBUG oslo_concurrency.lockutils [req-a4125986-e8f9-402e-9bfa-e14a53132290 req-65b1dd27-72e9-439e-81c0-f0521b361d18 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.601 189273 DEBUG nova.compute.manager [req-a4125986-e8f9-402e-9bfa-e14a53132290 req-65b1dd27-72e9-439e-81c0-f0521b361d18 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Processing event network-vif-plugged-3a644b09-361d-48d6-8efe-a180b1177788 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.601 189273 DEBUG nova.compute.manager [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.607 189273 DEBUG nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.613 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763800326.6130571, cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.613 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] VM Resumed (Lifecycle Event)
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.618 189273 INFO nova.virt.libvirt.driver [-] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Instance spawned successfully.
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.618 189273 DEBUG nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.640 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.650 189273 DEBUG nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.650 189273 DEBUG nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.651 189273 DEBUG nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.651 189273 DEBUG nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.652 189273 DEBUG nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.653 189273 DEBUG nova.virt.libvirt.driver [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.661 189273 DEBUG nova.network.neutron [req-11963514-84f7-4a23-9fbb-984d8254b1fb req-63879de1-d592-43d8-a7e0-61f6b2be1ee4 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Updated VIF entry in instance network info cache for port 3a644b09-361d-48d6-8efe-a180b1177788. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.661 189273 DEBUG nova.network.neutron [req-11963514-84f7-4a23-9fbb-984d8254b1fb req-63879de1-d592-43d8-a7e0-61f6b2be1ee4 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Updating instance_info_cache with network_info: [{"id": "3a644b09-361d-48d6-8efe-a180b1177788", "address": "fa:16:3e:7d:9f:dc", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a644b09-36", "ovs_interfaceid": "3a644b09-361d-48d6-8efe-a180b1177788", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.667 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.713 189273 DEBUG oslo_concurrency.lockutils [req-11963514-84f7-4a23-9fbb-984d8254b1fb req-63879de1-d592-43d8-a7e0-61f6b2be1ee4 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.728 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.762 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.767 189273 INFO nova.compute.manager [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Took 5.94 seconds to spawn the instance on the hypervisor.
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.768 189273 DEBUG nova.compute.manager [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.839 189273 INFO nova.compute.manager [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Took 6.49 seconds to build instance.
Nov 22 08:32:06 compute-0 nova_compute[189268]: 2025-11-22 08:32:06.859 189273 DEBUG oslo_concurrency.lockutils [None req-eaaf2e3e-ec9a-4529-95cb-833d4c346408 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:07 compute-0 nova_compute[189268]: 2025-11-22 08:32:07.856 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:08 compute-0 podman[242832]: 2025-11-22 08:32:08.154869071 +0000 UTC m=+0.089821693 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:32:08 compute-0 podman[242830]: 2025-11-22 08:32:08.15709044 +0000 UTC m=+0.099304109 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 08:32:08 compute-0 podman[242831]: 2025-11-22 08:32:08.191515522 +0000 UTC m=+0.130724899 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:32:08 compute-0 nova_compute[189268]: 2025-11-22 08:32:08.694 189273 DEBUG nova.compute.manager [req-75f9725f-aa79-4f11-b91c-aca6fc9ebeee req-bda5e698-bacc-43e7-9369-8d33632dbd73 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Received event network-vif-plugged-3a644b09-361d-48d6-8efe-a180b1177788 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:32:08 compute-0 nova_compute[189268]: 2025-11-22 08:32:08.694 189273 DEBUG oslo_concurrency.lockutils [req-75f9725f-aa79-4f11-b91c-aca6fc9ebeee req-bda5e698-bacc-43e7-9369-8d33632dbd73 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:08 compute-0 nova_compute[189268]: 2025-11-22 08:32:08.694 189273 DEBUG oslo_concurrency.lockutils [req-75f9725f-aa79-4f11-b91c-aca6fc9ebeee req-bda5e698-bacc-43e7-9369-8d33632dbd73 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:08 compute-0 nova_compute[189268]: 2025-11-22 08:32:08.695 189273 DEBUG oslo_concurrency.lockutils [req-75f9725f-aa79-4f11-b91c-aca6fc9ebeee req-bda5e698-bacc-43e7-9369-8d33632dbd73 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:08 compute-0 nova_compute[189268]: 2025-11-22 08:32:08.695 189273 DEBUG nova.compute.manager [req-75f9725f-aa79-4f11-b91c-aca6fc9ebeee req-bda5e698-bacc-43e7-9369-8d33632dbd73 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] No waiting events found dispatching network-vif-plugged-3a644b09-361d-48d6-8efe-a180b1177788 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:32:08 compute-0 nova_compute[189268]: 2025-11-22 08:32:08.695 189273 WARNING nova.compute.manager [req-75f9725f-aa79-4f11-b91c-aca6fc9ebeee req-bda5e698-bacc-43e7-9369-8d33632dbd73 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Received unexpected event network-vif-plugged-3a644b09-361d-48d6-8efe-a180b1177788 for instance with vm_state active and task_state None.
Nov 22 08:32:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:09.966 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:09.967 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:09.968 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.329 189273 DEBUG oslo_concurrency.lockutils [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "58ce38a0-b758-4032-bb58-56e47d822dbd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.329 189273 DEBUG oslo_concurrency.lockutils [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "58ce38a0-b758-4032-bb58-56e47d822dbd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.330 189273 DEBUG oslo_concurrency.lockutils [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "58ce38a0-b758-4032-bb58-56e47d822dbd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.330 189273 DEBUG oslo_concurrency.lockutils [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "58ce38a0-b758-4032-bb58-56e47d822dbd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.330 189273 DEBUG oslo_concurrency.lockutils [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "58ce38a0-b758-4032-bb58-56e47d822dbd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.331 189273 INFO nova.compute.manager [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Terminating instance
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.332 189273 DEBUG nova.compute.manager [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
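Every record in the teardown that follows carries the request id req-8c8ac4c3-825c-4c0b-94c9-114e7a020083, which is the practical way to isolate this one delete in a journal this chatty. A sketch, assuming the journal has been exported to a plain text file:

    import sys

    REQ = "req-8c8ac4c3-825c-4c0b-94c9-114e7a020083"

    # Print every record for one Nova request:
    #   python3 grep_req.py exported-journal.log
    with open(sys.argv[1], errors="replace") as fh:
        for line in fh:
            if REQ in line:
                print(line.rstrip())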
Nov 22 08:32:10 compute-0 kernel: tap814f8d81-07 (unregistering): left promiscuous mode
Nov 22 08:32:10 compute-0 NetworkManager[56326]: <info>  [1763800330.3655] device (tap814f8d81-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.386 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:10 compute-0 ovn_controller[97783]: 2025-11-22T08:32:10Z|00048|binding|INFO|Releasing lport 814f8d81-07a0-4d19-bc9a-0d33f4273c1e from this chassis (sb_readonly=0)
Nov 22 08:32:10 compute-0 ovn_controller[97783]: 2025-11-22T08:32:10Z|00049|binding|INFO|Setting lport 814f8d81-07a0-4d19-bc9a-0d33f4273c1e down in Southbound
Nov 22 08:32:10 compute-0 ovn_controller[97783]: 2025-11-22T08:32:10Z|00050|binding|INFO|Removing iface tap814f8d81-07 ovn-installed in OVS
Nov 22 08:32:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:10.397 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:43:35 192.168.0.242'], port_security=['fa:16:3e:48:43:35 192.168.0.242'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-eigzbqv6tptr-taee7tfsx64m-77hwefpqvacz-port-diran7zxat5l', 'neutron:cidrs': '192.168.0.242/24', 'neutron:device_id': '58ce38a0-b758-4032-bb58-56e47d822dbd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-eigzbqv6tptr-taee7tfsx64m-77hwefpqvacz-port-diran7zxat5l', 'neutron:project_id': '80e46844b3824928a6138235e5ede512', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d35d3a2-03b3-4b0d-a4c4-f066616bbaa8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.172', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a46a1c4a-0f65-4313-a2a5-5e5bba4e3fd3, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=814f8d81-07a0-4d19-bc9a-0d33f4273c1e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:32:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:10.399 106642 INFO neutron.agent.ovn.metadata.agent [-] Port 814f8d81-07a0-4d19-bc9a-0d33f4273c1e in datapath 02517cc7-8060-4764-b9b0-b1d7f59e3ae8 unbound from our chassis
Nov 22 08:32:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:10.401 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02517cc7-8060-4764-b9b0-b1d7f59e3ae8
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.407 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:10.421 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[dc9dc8fd-a95e-4b14-98f9-2be6b52bc384]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:32:10 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 22 08:32:10 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 6.182s CPU time.
Nov 22 08:32:10 compute-0 systemd-machined[155703]: Machine qemu-3-instance-00000003 terminated.
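systemd C-escapes unit names, so machine-qemu\x2d3\x2dinstance\x2d00000003.scope is machine-qemu-3-instance-00000003.scope (\x2d is '-'), matching the machine name in the systemd-machined line. Undoing the escaping when post-processing these records:

    # \xNN escapes in systemd unit names decode with unicode_escape.
    name = r"machine-qemu\x2d3\x2dinstance\x2d00000003.scope"
    print(name.encode("ascii").decode("unicode_escape"))
    # -> machine-qemu-3-instance-00000003.scope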
Nov 22 08:32:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:10.463 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[5773bfa1-4f27-426f-b93a-7a1dc4ac122e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:32:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:10.467 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[df0caeda-0c5a-4663-8b4a-a2e147f0e1f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:32:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:10.504 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[5389cc45-eddd-49cd-b438-cc3403024048]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:32:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:10.527 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[9c22c82b-c265-4156-8e2c-b8ba443a1af4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02517cc7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:86:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501085, 'reachable_time': 25253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242898, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:32:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:10.555 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[a3e59c43-6f14-4b89-ae97-d11be293da32]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap02517cc7-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501097, 'tstamp': 501097}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242899, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap02517cc7-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501100, 'tstamp': 501100}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242899, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
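The two large privsep replies above are netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) taken inside the ovnmeta-02517cc7-... namespace; the IFLA_*/IFA_* attribute lists are pyroute2's rendering of the kernel messages, which is why the tap02517cc7-81 veth and the 169.254.169.254 metadata address appear. Fetching the same state directly, as a sketch (assumes the pyroute2 package and enough privilege to enter the namespace):

    from pyroute2 import NetNS

    # Same link/address data as in the privsep replies above.
    with NetNS("ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8") as ns:
        for link in ns.get_links():
            print(link.get_attr("IFLA_IFNAME"), link.get_attr("IFLA_OPERSTATE"))
        for addr in ns.get_addr():
            print(addr.get_attr("IFA_ADDRESS"), addr["prefixlen"])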
Nov 22 08:32:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:10.557 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02517cc7-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.563 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.566 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.577 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:10.578 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02517cc7-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:32:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:10.579 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:32:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:10.579 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02517cc7-80, col_values=(('external_ids', {'iface-id': '5e2a8859-83a6-4000-bcad-5571f3c7bd5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:32:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:32:10.580 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
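The three transactions the agent just committed (DelPortCommand, AddPortCommand, DbSetCommand) map one-to-one onto plain ovs-vsctl operations; "Transaction caused no change" means the desired state already held. The same steps from Python via the CLI, as a sketch (run as root, same names as the log):

    import subprocess

    def vsctl(*args):
        # Each call mirrors one ovsdbapp command from the lines above.
        subprocess.run(("ovs-vsctl",) + args, check=True)

    vsctl("--if-exists", "del-port", "br-ex", "tap02517cc7-80")    # DelPortCommand
    vsctl("--may-exist", "add-port", "br-int", "tap02517cc7-80")   # AddPortCommand
    vsctl("set", "Interface", "tap02517cc7-80",                    # DbSetCommand
          "external_ids:iface-id=5e2a8859-83a6-4000-bcad-5571f3c7bd5d")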
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.634 189273 INFO nova.virt.libvirt.driver [-] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Instance destroyed successfully.
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.635 189273 DEBUG nova.objects.instance [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lazy-loading 'resources' on Instance uuid 58ce38a0-b758-4032-bb58-56e47d822dbd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.658 189273 DEBUG nova.virt.libvirt.vif [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T08:31:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-qv6tptr-taee7tfsx64m-77hwefpqvacz-vnf-nuiiyhjth6rc',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-qv6tptr-taee7tfsx64m-77hwefpqvacz-vnf-nuiiyhjth6rc',id=3,image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T08:32:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='209b9e59-811e-4c2b-a756-c29ba92c4b5c'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='80e46844b3824928a6138235e5ede512',ramdisk_id='',reservation_id='r-z9l1lg3b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T08:32:05Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NjEyNTgwMzE5MTYwOTkzNzkxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU2MTI1ODAzMTkxNjA5OTM3OTE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTYxMjU4MDMxOTE2MDk5Mzc5MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU2MTI1ODAzMTkxNjA5OTM3OTE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NjEyNTgwMzE5MTYwOTkzNzkxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NjEyNTgwMzE5MTYwOTkzNzkxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Nov 22 08:32:10 compute-0 nova_compute[189268]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTYxMjU4MDMxOTE2MDk5Mzc5MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU2MTI1ODAzMTkxNjA5OTM3OTE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NjEyNTgwMzE5MTYwOTkzNzkxPT0tLQo=',user_id='27ed1dd009ad4e29863ab5e3a9826c94',uuid=58ce38a0-b758-4032-bb58-56e47d822dbd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e", "address": "fa:16:3e:48:43:35", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.242", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap814f8d81-07", "ovs_interfaceid": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.658 189273 DEBUG nova.network.os_vif_util [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converting VIF {"id": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e", "address": "fa:16:3e:48:43:35", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.242", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap814f8d81-07", "ovs_interfaceid": "814f8d81-07a0-4d19-bc9a-0d33f4273c1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
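The user_data field in the Instance dump above is a base64-encoded MIME multipart (a cloud-config output capture, a Heat boothook, a part-handler, cfn-init data). It can be inspected with nothing but the standard library; a sketch, with the blob saved to a file first:

    import base64
    from email import message_from_bytes

    # Paste the base64 user_data string from the record above into
    # user_data.b64, then list the parts of the multipart.
    with open("user_data.b64") as fh:
        msg = message_from_bytes(base64.b64decode(fh.read()))

    for part in msg.walk():
        print(part.get_content_type(), part.get_filename())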
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.659 189273 DEBUG nova.network.os_vif_util [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:43:35,bridge_name='br-int',has_traffic_filtering=True,id=814f8d81-07a0-4d19-bc9a-0d33f4273c1e,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap814f8d81-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.660 189273 DEBUG os_vif [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:43:35,bridge_name='br-int',has_traffic_filtering=True,id=814f8d81-07a0-4d19-bc9a-0d33f4273c1e,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap814f8d81-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.663 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.664 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap814f8d81-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.666 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.669 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.669 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.672 189273 INFO os_vif [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:43:35,bridge_name='br-int',has_traffic_filtering=True,id=814f8d81-07a0-4d19-bc9a-0d33f4273c1e,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap814f8d81-07')
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.673 189273 INFO nova.virt.libvirt.driver [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Deleting instance files /var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd_del
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.675 189273 INFO nova.virt.libvirt.driver [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Deletion of /var/lib/nova/instances/58ce38a0-b758-4032-bb58-56e47d822dbd_del complete
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.735 189273 DEBUG nova.virt.libvirt.host [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.736 189273 INFO nova.virt.libvirt.host [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] UEFI support detected
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.739 189273 INFO nova.compute.manager [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Took 0.41 seconds to destroy the instance on the hypervisor.
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.740 189273 DEBUG oslo.service.loopingcall [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.740 189273 DEBUG nova.compute.manager [-] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 08:32:10 compute-0 nova_compute[189268]: 2025-11-22 08:32:10.740 189273 DEBUG nova.network.neutron [-] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 08:32:11 compute-0 nova_compute[189268]: 2025-11-22 08:32:11.011 189273 DEBUG nova.compute.manager [req-fb598bde-d8cc-4697-ac4b-dae15a90110f req-b1fe24a0-3382-48bb-8a2c-9c4bdeb2051f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Received event network-vif-unplugged-814f8d81-07a0-4d19-bc9a-0d33f4273c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:32:11 compute-0 nova_compute[189268]: 2025-11-22 08:32:11.011 189273 DEBUG oslo_concurrency.lockutils [req-fb598bde-d8cc-4697-ac4b-dae15a90110f req-b1fe24a0-3382-48bb-8a2c-9c4bdeb2051f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "58ce38a0-b758-4032-bb58-56e47d822dbd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:11 compute-0 nova_compute[189268]: 2025-11-22 08:32:11.011 189273 DEBUG oslo_concurrency.lockutils [req-fb598bde-d8cc-4697-ac4b-dae15a90110f req-b1fe24a0-3382-48bb-8a2c-9c4bdeb2051f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "58ce38a0-b758-4032-bb58-56e47d822dbd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:11 compute-0 rsyslogd[236668]: message too long (8192) with configured size 8096, begin of message is: 2025-11-22 08:32:10.658 189273 DEBUG nova.virt.libvirt.vif [None req-8c8ac4c3-82 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
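rsyslogd is flagging the vif unplug record above: at 8192 bytes it exceeds the configured 8096-byte ceiling, so the record (and its base64 user_data) arrives split across lines. If complete records matter more than memory, the ceiling can be raised with rsyslog's global directive, set before any input module is loaded (see the rsyslog.com/e/2445 link in the message); a sketch:

    # /etc/rsyslog.conf (near the top, before module loads)
    $MaxMessageSize 16k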
Nov 22 08:32:11 compute-0 nova_compute[189268]: 2025-11-22 08:32:11.012 189273 DEBUG oslo_concurrency.lockutils [req-fb598bde-d8cc-4697-ac4b-dae15a90110f req-b1fe24a0-3382-48bb-8a2c-9c4bdeb2051f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "58ce38a0-b758-4032-bb58-56e47d822dbd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:11 compute-0 nova_compute[189268]: 2025-11-22 08:32:11.019 189273 DEBUG nova.compute.manager [req-fb598bde-d8cc-4697-ac4b-dae15a90110f req-b1fe24a0-3382-48bb-8a2c-9c4bdeb2051f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] No waiting events found dispatching network-vif-unplugged-814f8d81-07a0-4d19-bc9a-0d33f4273c1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:32:11 compute-0 nova_compute[189268]: 2025-11-22 08:32:11.019 189273 DEBUG nova.compute.manager [req-fb598bde-d8cc-4697-ac4b-dae15a90110f req-b1fe24a0-3382-48bb-8a2c-9c4bdeb2051f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Received event network-vif-unplugged-814f8d81-07a0-4d19-bc9a-0d33f4273c1e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 08:32:11 compute-0 nova_compute[189268]: 2025-11-22 08:32:11.769 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:12 compute-0 nova_compute[189268]: 2025-11-22 08:32:12.433 189273 DEBUG nova.compute.manager [req-570a21e9-1f90-4ff1-8f61-9364406020a8 req-3bc1c9aa-e711-4a84-868f-cedcf23a74f3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Received event network-changed-814f8d81-07a0-4d19-bc9a-0d33f4273c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:32:12 compute-0 nova_compute[189268]: 2025-11-22 08:32:12.434 189273 DEBUG nova.compute.manager [req-570a21e9-1f90-4ff1-8f61-9364406020a8 req-3bc1c9aa-e711-4a84-868f-cedcf23a74f3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Refreshing instance network info cache due to event network-changed-814f8d81-07a0-4d19-bc9a-0d33f4273c1e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:32:12 compute-0 nova_compute[189268]: 2025-11-22 08:32:12.434 189273 DEBUG oslo_concurrency.lockutils [req-570a21e9-1f90-4ff1-8f61-9364406020a8 req-3bc1c9aa-e711-4a84-868f-cedcf23a74f3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-58ce38a0-b758-4032-bb58-56e47d822dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:32:12 compute-0 nova_compute[189268]: 2025-11-22 08:32:12.434 189273 DEBUG oslo_concurrency.lockutils [req-570a21e9-1f90-4ff1-8f61-9364406020a8 req-3bc1c9aa-e711-4a84-868f-cedcf23a74f3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-58ce38a0-b758-4032-bb58-56e47d822dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:32:12 compute-0 nova_compute[189268]: 2025-11-22 08:32:12.434 189273 DEBUG nova.network.neutron [req-570a21e9-1f90-4ff1-8f61-9364406020a8 req-3bc1c9aa-e711-4a84-868f-cedcf23a74f3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Refreshing network info cache for port 814f8d81-07a0-4d19-bc9a-0d33f4273c1e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:32:12 compute-0 nova_compute[189268]: 2025-11-22 08:32:12.888 189273 INFO nova.network.neutron [req-570a21e9-1f90-4ff1-8f61-9364406020a8 req-3bc1c9aa-e711-4a84-868f-cedcf23a74f3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Port 814f8d81-07a0-4d19-bc9a-0d33f4273c1e from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 22 08:32:12 compute-0 nova_compute[189268]: 2025-11-22 08:32:12.888 189273 DEBUG nova.network.neutron [req-570a21e9-1f90-4ff1-8f61-9364406020a8 req-3bc1c9aa-e711-4a84-868f-cedcf23a74f3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:32:12 compute-0 nova_compute[189268]: 2025-11-22 08:32:12.936 189273 DEBUG oslo_concurrency.lockutils [req-570a21e9-1f90-4ff1-8f61-9364406020a8 req-3bc1c9aa-e711-4a84-868f-cedcf23a74f3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-58ce38a0-b758-4032-bb58-56e47d822dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:32:13 compute-0 nova_compute[189268]: 2025-11-22 08:32:13.112 189273 DEBUG nova.compute.manager [req-a8064b12-c306-4331-8bbc-1fb261568c86 req-03d6f416-4037-4586-9863-ba4375670d0f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Received event network-vif-plugged-814f8d81-07a0-4d19-bc9a-0d33f4273c1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:32:13 compute-0 nova_compute[189268]: 2025-11-22 08:32:13.112 189273 DEBUG oslo_concurrency.lockutils [req-a8064b12-c306-4331-8bbc-1fb261568c86 req-03d6f416-4037-4586-9863-ba4375670d0f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "58ce38a0-b758-4032-bb58-56e47d822dbd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:13 compute-0 nova_compute[189268]: 2025-11-22 08:32:13.112 189273 DEBUG oslo_concurrency.lockutils [req-a8064b12-c306-4331-8bbc-1fb261568c86 req-03d6f416-4037-4586-9863-ba4375670d0f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "58ce38a0-b758-4032-bb58-56e47d822dbd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:13 compute-0 nova_compute[189268]: 2025-11-22 08:32:13.113 189273 DEBUG oslo_concurrency.lockutils [req-a8064b12-c306-4331-8bbc-1fb261568c86 req-03d6f416-4037-4586-9863-ba4375670d0f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "58ce38a0-b758-4032-bb58-56e47d822dbd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:13 compute-0 nova_compute[189268]: 2025-11-22 08:32:13.113 189273 DEBUG nova.compute.manager [req-a8064b12-c306-4331-8bbc-1fb261568c86 req-03d6f416-4037-4586-9863-ba4375670d0f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] No waiting events found dispatching network-vif-plugged-814f8d81-07a0-4d19-bc9a-0d33f4273c1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:32:13 compute-0 nova_compute[189268]: 2025-11-22 08:32:13.113 189273 WARNING nova.compute.manager [req-a8064b12-c306-4331-8bbc-1fb261568c86 req-03d6f416-4037-4586-9863-ba4375670d0f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Received unexpected event network-vif-plugged-814f8d81-07a0-4d19-bc9a-0d33f4273c1e for instance with vm_state active and task_state deleting.
Nov 22 08:32:13 compute-0 nova_compute[189268]: 2025-11-22 08:32:13.338 189273 DEBUG nova.network.neutron [-] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:32:13 compute-0 nova_compute[189268]: 2025-11-22 08:32:13.362 189273 INFO nova.compute.manager [-] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Took 2.62 seconds to deallocate network for instance.
Nov 22 08:32:13 compute-0 nova_compute[189268]: 2025-11-22 08:32:13.553 189273 DEBUG oslo_concurrency.lockutils [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:13 compute-0 nova_compute[189268]: 2025-11-22 08:32:13.554 189273 DEBUG oslo_concurrency.lockutils [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:13 compute-0 nova_compute[189268]: 2025-11-22 08:32:13.656 189273 DEBUG nova.compute.provider_tree [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:32:13 compute-0 nova_compute[189268]: 2025-11-22 08:32:13.672 189273 DEBUG nova.scheduler.client.report [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
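The inventory dict in this line is what the resource tracker reports to Placement; usable capacity per resource class works out to (total - reserved) * allocation_ratio. A quick check with this host's numbers:

    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2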
Nov 22 08:32:13 compute-0 nova_compute[189268]: 2025-11-22 08:32:13.698 189273 DEBUG oslo_concurrency.lockutils [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:13 compute-0 nova_compute[189268]: 2025-11-22 08:32:13.721 189273 INFO nova.scheduler.client.report [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Deleted allocations for instance 58ce38a0-b758-4032-bb58-56e47d822dbd
Nov 22 08:32:13 compute-0 nova_compute[189268]: 2025-11-22 08:32:13.798 189273 DEBUG oslo_concurrency.lockutils [None req-8c8ac4c3-825c-4c0b-94c9-114e7a020083 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "58ce38a0-b758-4032-bb58-56e47d822dbd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
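The "held 3.469s" reported here spans the whole terminate path: the instance lock was taken at 08:32:10.329 (the do_terminate_instance acquire above) and dropped at 08:32:13.798. A trivial check of the arithmetic:

    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    t0 = datetime.strptime("2025-11-22 08:32:10.329", fmt)
    t1 = datetime.strptime("2025-11-22 08:32:13.798", fmt)
    print((t1 - t0).total_seconds())   # 3.469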
Nov 22 08:32:15 compute-0 podman[242922]: 2025-11-22 08:32:15.153769304 +0000 UTC m=+0.104362056 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 22 08:32:15 compute-0 podman[242923]: 2025-11-22 08:32:15.184071014 +0000 UTC m=+0.118336564 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 22 08:32:15 compute-0 nova_compute[189268]: 2025-11-22 08:32:15.667 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:16 compute-0 nova_compute[189268]: 2025-11-22 08:32:16.775 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:19 compute-0 podman[242962]: 2025-11-22 08:32:19.183091255 +0000 UTC m=+0.113961566 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2024-09-18T21:23:30, container_name=kepler, architecture=x86_64, io.buildah.version=1.29.0, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.4, io.openshift.expose-services=, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, name=ubi9, managed_by=edpm_ansible, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc.)
Nov 22 08:32:19 compute-0 podman[242963]: 2025-11-22 08:32:19.236070219 +0000 UTC m=+0.160435173 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
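Each health_status event also reports health_failing_streak, the count of consecutive failed probes; podman marks a container unhealthy once the streak exhausts the configured retries. A small illustrative model of that bookkeeping (the threshold and function are assumptions, not podman's libpod code):

    # ASSUMPTION: simplified model of podman's failing-streak logic.
    def next_health_state(probe_ok, failing_streak, retries=3):
        if probe_ok:
            return "healthy", 0                  # a success resets the streak
        failing_streak += 1
        if failing_streak >= retries:
            return "unhealthy", failing_streak   # retries exhausted
        return "healthy", failing_streak         # still within the retry budget

    state, streak = "healthy", 0
    for ok in (True, False, False, False):
        state, streak = next_health_state(ok, streak)
    print(state, streak)                         # -> unhealthy 3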
Nov 22 08:32:20 compute-0 nova_compute[189268]: 2025-11-22 08:32:20.673 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:21 compute-0 nova_compute[189268]: 2025-11-22 08:32:21.779 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.090 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.091 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
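The two lines above say the [pollsters] source has more pollsters than worker threads (a single thread here), so the executor queues them and the cycle runs serially. A stdlib sketch of that queueing behaviour:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(meter):
        time.sleep(0.1)                    # stand-in for one pollster run
        return meter

    meters = ["cpu", "disk.device.read.bytes", "network.incoming.bytes"]
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:    # 1 worker, 3 pollsters
        done = list(pool.map(poll, meters))            # runs one after another
    print(done, f"{time.monotonic() - start:.1f}s")    # ~0.3s, not ~0.1s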
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.092 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.092 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e52b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
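Every stevedore.extension.Extension registered above is a pollster plugin loaded from a Python entry point. A minimal sketch of listing such plugins with stevedore; the namespace matches ceilometer's compute pollster entry points, and ceilometer itself instantiates them with its own configuration:

    from stevedore import extension

    # Enumerate the plugins registered under ceilometer's compute-pollster
    # entry-point namespace without instantiating them.
    mgr = extension.ExtensionManager(namespace="ceilometer.poll.compute")
    for ext in mgr:
        print(ext.name, ext.entry_point)   # e.g. cpu, network.incoming.bytes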
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.102 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '78b5db02-f49a-4c0b-b4f6-8d3b3d689e66', 'name': 'test_0', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.105 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.107 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}41de7311aa3eb0f3adb679afd5ea377bdc27c99a5c84bf2ba532fbbe80a7016c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.981 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Sat, 22 Nov 2025 08:32:22 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-658e1f87-09c3-40dd-ac9b-3ffdff6e2f86 x-openstack-request-id: req-658e1f87-09c3-40dd-ac9b-3ffdff6e2f86 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.982 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435", "name": "vn-qv6tptr-hea4zpteaolv-dnc7x4xkssdg-vnf-savd4bbetntp", "status": "ACTIVE", "tenant_id": "80e46844b3824928a6138235e5ede512", "user_id": "27ed1dd009ad4e29863ab5e3a9826c94", "metadata": {"metering.server_group": "209b9e59-811e-4c2b-a756-c29ba92c4b5c"}, "hostId": "984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4", "image": {"id": "de9f57cf-28b4-4cbd-b943-19aa098356bf", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/de9f57cf-28b4-4cbd-b943-19aa098356bf"}]}, "flavor": {"id": "796e25a8-f28d-499e-b2fb-dfae32f0eed7", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/796e25a8-f28d-499e-b2fb-dfae32f0eed7"}]}, "created": "2025-11-22T08:31:59Z", "updated": "2025-11-22T08:32:06Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.192", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:7d:9f:dc"}, {"version": 4, "addr": "192.168.122.207", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:7d:9f:dc"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-22T08:32:06.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.982 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435 used request id req-658e1f87-09c3-40dd-ac9b-3ffdff6e2f86 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.984 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435', 'name': 'vn-qv6tptr-hea4zpteaolv-dnc7x4xkssdg-vnf-savd4bbetntp', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {'metering.server_group': '209b9e59-811e-4c2b-a756-c29ba92c4b5c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.988 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a8349cde-3de3-4359-9fba-8d329cab9476', 'name': 'vn-qv6tptr-whvy4btuikeu-vmbwmtq4hym4-vnf-rixlnkr2j72q', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {'metering.server_group': '209b9e59-811e-4c2b-a756-c29ba92c4b5c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
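The REQ/RESP pair shows why discovery sometimes calls out to the Nova API: libvirt alone cannot supply server metadata such as metering.server_group, so the agent fetches the server record over HTTPS. A sketch of the same lookup with keystoneauth1 and python-novaclient; the credential values are placeholders, and a real agent reads them from its service configuration:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    # Placeholder credentials for the sketch only.
    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",
        username="ceilometer", password="secret",
        project_name="service",
        user_domain_name="Default", project_domain_name="Default",
    )
    sess = session.Session(auth=auth)
    nova = nova_client.Client("2.1", session=sess)

    server = nova.servers.get("cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435")
    print(server.name, server.metadata)   # includes metering.server_group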
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.989 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.989 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.989 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.989 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.990 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-22T08:32:22.989420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
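Note the two thread ids: the polling worker (15) emits the heartbeat, and a separate status thread (12) records it a moment later. A stdlib producer/consumer sketch of that handoff; the queue is an assumption used to model the pattern, not ceilometer's exact mechanism:

    import datetime
    import queue
    import threading

    heartbeats = queue.Queue()

    def status_thread():
        while True:
            meter, ts = heartbeats.get()
            if meter is None:                     # sentinel: shut down
                return
            print(f"Updated heartbeat for {meter} ({ts.isoformat()})")

    t = threading.Thread(target=status_thread)
    t.start()
    # The polling worker publishes one heartbeat per pollster run.
    now = datetime.datetime.now(datetime.timezone.utc)
    heartbeats.put(("network.incoming.bytes", now))
    heartbeats.put((None, None))
    t.join()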
Nov 22 08:32:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:22.995 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.000 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435 / tap3a644b09-36 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.001 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.005 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.bytes volume: 8448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.006 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
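The "No delta meter predecessor" line explains the tiny volume (90) for the just-launched instance: per-vNIC readings are computed against the previous observation, and on the first poll there is none, so the raw counter is reported as-is. An illustrative sketch of that per-device cache (names are assumptions, not ceilometer's inspector code):

    # ASSUMPTION: simplified per-vNIC delta cache for illustration.
    _prev = {}   # (instance_id, device) -> last cumulative counter

    def delta_reading(instance_id, device, cumulative):
        key = (instance_id, device)
        if key not in _prev:
            print(f"No delta meter predecessor for {instance_id} / {device}")
            _prev[key] = cumulative
            return cumulative                  # first poll: no baseline yet
        delta = cumulative - _prev[key]
        _prev[key] = cumulative
        return delta

    print(delta_reading("cb2042e7", "tap3a644b09-36", 90))    # 90 (first poll)
    print(delta_reading("cb2042e7", "tap3a644b09-36", 250))   # 160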
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.006 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.006 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.007 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.007 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.007 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.007 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.007 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.008 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-22T08:32:23.007265) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.008 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.packets volume: 64 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.008 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.009 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.009 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.009 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.009 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.009 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.009 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.010 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-22T08:32:23.009430) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.010 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.010 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.011 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.011 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.011 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.011 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.011 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.011 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-22T08:32:23.011384) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.012 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.012 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.012 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.013 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.013 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.013 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.013 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.013 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-22T08:32:23.013760) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.045 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/cpu volume: 40700000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.079 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/cpu volume: 15910000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.116 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/cpu volume: 296450000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.117 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
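The cpu meter is cumulative guest CPU time in nanoseconds, so 40700000000 means test_0 has consumed about 40.7 CPU-seconds since boot. Turning two successive samples into a utilisation percentage is plain arithmetic, as the classic cpu_util derivation did; the second sample and the interval below are assumed values:

    # First sample from the log; second sample and interval are assumed.
    t0_ns, t1_ns = 40_700_000_000, 41_300_000_000   # cumulative CPU time (ns)
    interval_s = 30                                  # seconds between polls
    vcpus = 1                                        # flavor m1.small

    cpu_util = 100.0 * (t1_ns - t0_ns) / (interval_s * 1e9 * vcpus)
    print(f"cpu_util = {cpu_util:.1f}%")             # -> 2.0%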
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.117 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.117 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.118 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.118 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.118 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.118 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.118 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.118 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.119 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-22T08:32:23.118171) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.119 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.120 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.120 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.120 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.120 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.120 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-22T08:32:23.120603) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.155 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.156 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.156 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.193 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.194 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.195 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.227 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.228 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.228 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.229 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
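Three disk.device.capacity samples per instance correspond to three block devices on each domain: the 1 GiB root disk, the 1 GiB ephemeral disk from the m1.small flavor, and a small config drive (the 485376/583680-byte values). libvirt reports capacity, allocation, and physical size per device, which also feeds the disk.device.allocation meter polled later in this cycle. A sketch with libvirt-python; the device names are typical guesses, not read from the log:

    import libvirt

    conn = libvirt.open("qemu:///system")        # typical compute-node URI
    dom = conn.lookupByName("instance-00000004")
    for dev in ("vda", "vdb", "sda"):            # root, ephemeral, config drive (assumed)
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, capacity, allocation, physical)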
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.229 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.229 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.230 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.230 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.230 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.231 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-22T08:32:23.230247) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.358 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.359 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.360 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.442 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.443 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.443 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.518 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.519 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.520 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.521 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.522 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.522 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.522 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.523 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.523 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.523 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.524 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-22T08:32:23.523267) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.524 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.525 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.525 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.526 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.526 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.527 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.527 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.529 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.529 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.530 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.530 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.530 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.530 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.531 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.531 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.532 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.bytes volume: 7436 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.532 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
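Every polling cycle above logs the same coordination check: only pollsters whose source belongs to a coordination group consult a hash ring to decide whether this agent owns a resource; with no group configured (the `[None]` case here), the agent polls everything it discovered. A toy sketch of that decision, with a modulo hash standing in for tooz's real hash ring:

```python
import hashlib

def owns_resource(resource_id: str, node_index: int, node_count: int) -> bool:
    """Toy stand-in for a tooz hash ring: hash the resource id and
    map it onto one of node_count agents."""
    digest = int(hashlib.sha256(resource_id.encode()).hexdigest(), 16)
    return digest % node_count == node_index

def should_poll(resource_id, coordination_group, node_index=0, node_count=1):
    # No coordination group configured (the "[None]" case in the log):
    # the agent polls every discovered resource itself.
    if coordination_group is None:
        return True
    return owns_resource(resource_id, node_index, node_count)

print(should_poll("78b5db02-f49a-4c0b-b4f6-8d3b3d689e66", None))  # True
```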
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.533 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.533 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.533 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.533 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-22T08:32:23.530764) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.534 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.534 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.535 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-22T08:32:23.534379) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.534 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-qv6tptr-hea4zpteaolv-dnc7x4xkssdg-vnf-savd4bbetntp>] on source pollsters from now on!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-qv6tptr-hea4zpteaolv-dnc7x4xkssdg-vnf-savd4bbetntp>]
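The ERROR above is the blacklisting path: a pollster raises `PollsterPermanentError` when it can never produce data for a resource (the libvirt inspector provides no rate meters, per the DEBUG line just before), and the manager stops polling that resource with that pollster instead of retrying every cycle. An illustrative sketch of the pattern, assuming a simple per-pollster blacklist dict rather than ceilometer's actual bookkeeping:

```python
class PollsterPermanentError(Exception):
    """Raised by a pollster to declare resources permanently unpollable."""
    def __init__(self, resources):
        super().__init__(str(resources))
        self.resources = resources

# Illustrative manager-side bookkeeping, not ceilometer's actual code:
# pollster name -> set of resource ids never to poll again.
blacklist: dict[str, set[str]] = {}

def poll_one(pollster_name, resource_id, get_samples):
    if resource_id in blacklist.get(pollster_name, set()):
        return []  # skip permanently failed resources
    try:
        return list(get_samples(resource_id))
    except PollsterPermanentError as err:
        # Matches the log: prevent this pollster/resource pair from
        # being polled again.
        blacklist.setdefault(pollster_name, set()).update(err.resources)
        return []

def fake_rate_sampler(resource_id):
    raise PollsterPermanentError([resource_id])

poll_one("network.incoming.bytes.rate", "vn-qv6tptr", fake_rate_sampler)
poll_one("network.incoming.bytes.rate", "vn-qv6tptr", fake_rate_sampler)  # skipped
```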
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.535 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.535 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.535 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.535 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.536 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.536 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 1339396359 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.536 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 138141875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.536 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 117550863 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.537 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.latency volume: 909182636 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.537 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.537 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.latency volume: 16585329 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.538 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.latency volume: 875417919 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.538 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.latency volume: 107543456 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.538 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.latency volume: 90621118 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.539 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
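Note the two ids in the oslo.log prefix: 15 runs the pollsters and logs `Pollster heartbeat update: <name>`, while 12 later logs `Updated heartbeat for <name> (<timestamp>)`, which is why the update lines interleave out of order with the polling flow. A hedged sketch of that producer/consumer split (the queue and names are assumptions, not ceilometer's internals):

```python
import queue
import threading
from datetime import datetime, timezone

heartbeats: queue.Queue = queue.Queue()
status: dict[str, str] = {}  # pollster name -> last heartbeat timestamp

def polling_worker(pollster_names):
    # The "Pollster heartbeat update: <name>" side of the log (id 15).
    for name in pollster_names:
        heartbeats.put((name, datetime.now(timezone.utc).isoformat()))

def status_updater():
    # The "Updated heartbeat for <name> (<timestamp>)" side (id 12);
    # it drains the queue asynchronously, so its lines interleave with
    # whatever the polling side is doing by then.
    while True:
        name, ts = heartbeats.get()
        if name is None:  # sentinel: stop
            break
        status[name] = ts

updater = threading.Thread(target=status_updater)
updater.start()
polling_worker(["disk.device.read.latency", "network.incoming.bytes.delta"])
heartbeats.put((None, ""))
updater.join()
print(status)
```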
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.539 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.540 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.540 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.540 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.540 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.540 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.bytes.delta volume: 252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.541 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-22T08:32:23.535956) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.541 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-22T08:32:23.540594) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.541 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.541 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.bytes.delta volume: 3599 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.542 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
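The `.delta` meters subtract the previously observed cumulative counter from the current one, so an idle interface yields 0 (as for cb2042e7-... above) and the 252 for 78b5db02-... is the growth since the last cycle. A minimal sketch, assuming a simple in-process cache keyed by instance and interface:

```python
# Hypothetical cache; ceilometer keeps comparable state to turn
# cumulative interface counters into per-interval deltas.
_prev: dict[tuple[str, str], int] = {}

def bytes_delta(instance_id: str, iface: str, cumulative: int) -> int:
    key = (instance_id, iface)
    prev = _prev.get(key, cumulative)  # first poll yields 0, not the total
    _prev[key] = cumulative
    # A counter reset (e.g. an instance reboot) would make the delta
    # negative; clamp to 0 like the zero-valued samples above.
    return max(cumulative - prev, 0)

print(bytes_delta("78b5db02", "tap0", 1000))  # 0 on first observation
print(bytes_delta("78b5db02", "tap0", 1252))  # 252, cf. the log above
```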
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.542 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.542 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.543 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.543 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.543 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-22T08:32:23.543322) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.543 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.544 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.544 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.545 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.545 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.545 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.546 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.546 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.546 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.547 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.547 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.547 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.548 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.548 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.548 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.548 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.549 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.549 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.549 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.550 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.551 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.551 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-22T08:32:23.548344) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.551 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.552 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.552 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.553 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.553 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.553 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.553 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.554 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.554 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.554 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.554 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.555 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-22T08:32:23.554113) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.555 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.555 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.556 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.556 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.556 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.557 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.557 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.558 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.558 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.558 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.558 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.558 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.559 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.559 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.559 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-22T08:32:23.559075) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.559 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.560 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.560 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.560 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.561 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.561 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.bytes volume: 41848832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.562 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.562 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.563 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.563 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.563 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.563 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.563 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.564 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.564 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.564 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-22T08:32:23.564025) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.564 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.565 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.565 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
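The power.state samples carry libvirt's numeric domain state, so `volume: 1` for all three instances means they are running. The mapping below lists the virDomainState values for reference; `describe_power_state` is an illustrative helper, not a ceilometer API:

```python
# libvirt's virDomainState enum values; power.state samples report
# these integers directly, so "volume: 1" above means "running".
LIBVIRT_POWER_STATES = {
    0: "nostate",
    1: "running",
    2: "blocked",
    3: "paused",
    4: "shutdown",
    5: "shutoff",
    6: "crashed",
    7: "pmsuspended",
}

def describe_power_state(volume: int) -> str:
    return LIBVIRT_POWER_STATES.get(volume, "unknown")

print(describe_power_state(1))  # running
```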
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.565 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.566 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.566 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.566 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.566 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.566 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 18733649639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.567 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 19241219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.567 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.567 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-22T08:32:23.566532) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.568 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.568 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.568 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.569 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.latency volume: 3215790755 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.569 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.latency volume: 13984579 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.570 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.570 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.570 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.571 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.571 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.571 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.571 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.572 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.572 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.572 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.573 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.573 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.573 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.573 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-22T08:32:23.571509) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.574 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-22T08:32:23.573496) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.574 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.574 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.packets volume: 56 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.575 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.575 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.575 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.575 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.576 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.576 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.577 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
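Unlike the disk.device.* meters, disk.root.size and disk.ephemeral.size are derived from the flavor the instance was booted with rather than from libvirt runtime statistics, which is presumably why no per-device `_stats_to_sample` lines appear for either meter above. A hedged sketch under that assumption (the `instance` structure is illustrative, not the real Nova object):

```python
# Hedged sketch: these meters read the instance's flavor metadata.
def flavor_disk_samples(instance):
    flavor = instance["flavor"]
    yield ("disk.root.size", flavor.get("disk", 0))            # GB
    yield ("disk.ephemeral.size", flavor.get("ephemeral", 0))  # GB

inst = {"flavor": {"disk": 20, "ephemeral": 0}}
print(dict(flavor_disk_samples(inst)))
# {'disk.root.size': 20, 'disk.ephemeral.size': 0}
```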
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.577 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.577 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.577 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.577 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.578 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-22T08:32:23.576165) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.578 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-22T08:32:23.578232) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.579 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.579 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.579 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.579 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.579 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.580 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.580 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.580 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.580 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.580 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.580 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-22T08:32:23.580172) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.581 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.bytes.delta volume: 2544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.581 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.581 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.581 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.581 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.582 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.582 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.582 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-22T08:32:23.582134) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.582 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.582 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-qv6tptr-hea4zpteaolv-dnc7x4xkssdg-vnf-savd4bbetntp>] on source pollsters from now on!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-qv6tptr-hea4zpteaolv-dnc7x4xkssdg-vnf-savd4bbetntp>]
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.582 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.582 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.583 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.583 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.583 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.583 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/memory.usage volume: 48.90625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.583 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.583 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435: ceilometer.compute.pollsters.NoVolumeException
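The "volume: Unavailable" DEBUG followed by this WARNING is the soft-failure path: when libvirt returns no value for one instance (typically because the domain is shut off), the sample conversion raises and the agent skips just that instance while the others keep reporting. A sketch of that guard, with all names except NoVolumeException invented for illustration:

    # Skip a single instance whose meter value is missing, keep the rest.
    class NoVolumeException(Exception):
        pass

    class Stats:
        memory_usage = None   # libvirt reported nothing for this domain

    def stats_to_volume(stats, attr):
        value = getattr(stats, attr, None)
        if value is None:                  # logged as "volume: Unavailable"
            raise NoVolumeException()
        return value

    try:
        stats_to_volume(Stats(), "memory_usage")
    except NoVolumeException:
        print("memory.usage statistic is not available for this instance")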
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-22T08:32:23.583235) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.584 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/memory.usage volume: 49.0078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.584 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.584 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.584 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.584 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.586 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.586 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.586 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.586 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.586 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.586 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:32:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:32:23.586 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
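The burst of "Finished processing pollster [...]" lines marks the end of one polling interval: the task walks every configured meter, runs discovery, polls, stamps a heartbeat, and logs completion. The loop below is a toy reconstruction of that flow inferred from the log, not ceilometer's actual manager code:

    # One polling pass over a dict of meter-name -> poll function.
    import datetime

    heartbeats = {}

    def run_polling_task(pollsters, discover):
        for name, poll in pollsters.items():
            resources = discover()                 # discovery per pollster
            heartbeats[name] = datetime.datetime.now(datetime.timezone.utc)
            samples = poll(resources)              # "Polling pollster <name>"
            print(f"Finished processing pollster [{name}].")

    run_polling_task(
        {"memory.usage": lambda res: [49.0 for _ in res]},
        lambda: ["78b5db02-f49a-4c0b-b4f6-8d3b3d689e66"],
    )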
Nov 22 08:32:24 compute-0 nova_compute[189268]: 2025-11-22 08:32:24.433 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:32:24 compute-0 nova_compute[189268]: 2025-11-22 08:32:24.434 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:32:24 compute-0 nova_compute[189268]: 2025-11-22 08:32:24.435 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:32:24 compute-0 nova_compute[189268]: 2025-11-22 08:32:24.842 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:32:24 compute-0 nova_compute[189268]: 2025-11-22 08:32:24.843 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:32:24 compute-0 nova_compute[189268]: 2025-11-22 08:32:24.843 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:32:24 compute-0 nova_compute[189268]: 2025-11-22 08:32:24.844 189273 DEBUG nova.objects.instance [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
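The Acquiring/Acquired pair above (and the Releasing line further down) is nova serializing cache refreshes per instance: every worker that wants to touch "refresh_cache-<uuid>" takes the same named lock. Nova uses oslo_concurrency.lockutils for this; the stdlib sketch below shows only the shape of the pattern, with a stub in place of the Neutron call:

    # Named per-instance locks guarding network info cache refreshes.
    import threading
    from collections import defaultdict

    _locks = defaultdict(threading.Lock)

    def refresh_network_info_cache(instance_uuid, fetch_nw_info):
        with _locks[f"refresh_cache-{instance_uuid}"]:   # Acquired lock
            return fetch_nw_info(instance_uuid)          # released on exit

    print(refresh_network_info_cache(
        "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66",
        lambda uuid: [{"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe"}]))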
Nov 22 08:32:25 compute-0 podman[243004]: 2025-11-22 08:32:25.18396963 +0000 UTC m=+0.132312412 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, container_name=openstack_network_exporter, release=1755695350, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, version=9.6, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 22 08:32:25 compute-0 nova_compute[189268]: 2025-11-22 08:32:25.629 189273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763800330.628154, 58ce38a0-b758-4032-bb58-56e47d822dbd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:32:25 compute-0 nova_compute[189268]: 2025-11-22 08:32:25.630 189273 INFO nova.compute.manager [-] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] VM Stopped (Lifecycle Event)
Nov 22 08:32:25 compute-0 nova_compute[189268]: 2025-11-22 08:32:25.656 189273 DEBUG nova.compute.manager [None req-7aac1424-a62a-475b-b37a-c7e039d49bef - - - - - -] [instance: 58ce38a0-b758-4032-bb58-56e47d822dbd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:32:25 compute-0 nova_compute[189268]: 2025-11-22 08:32:25.675 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:26 compute-0 nova_compute[189268]: 2025-11-22 08:32:26.781 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:27 compute-0 nova_compute[189268]: 2025-11-22 08:32:27.102 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updating instance_info_cache with network_info: [{"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "address": "fa:16:3e:4f:4a:5d", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.53", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4645bc8c-a8", "ovs_interfaceid": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:32:27 compute-0 nova_compute[189268]: 2025-11-22 08:32:27.114 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:32:27 compute-0 nova_compute[189268]: 2025-11-22 08:32:27.114 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
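The info_cache payload logged two lines up is plain JSON, so it can be picked apart directly; the snippet below extracts the fixed and floating addresses from a copy of that structure trimmed to the fields it touches:

    # Walk the logged network_info structure: VIF -> subnets -> ips.
    import json

    nw_info = json.loads('''[{"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe",
      "network": {"subnets": [{"cidr": "192.168.0.0/24",
        "ips": [{"address": "192.168.0.53",
          "floating_ips": [{"address": "192.168.122.224"}]}]}]}}]''')

    for vif in nw_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print("fixed:", ip["address"],
                      "floating:", [f["address"] for f in ip["floating_ips"]])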
Nov 22 08:32:27 compute-0 nova_compute[189268]: 2025-11-22 08:32:27.114 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:32:27 compute-0 podman[243025]: 2025-11-22 08:32:27.135600704 +0000 UTC m=+0.088372073 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 22 08:32:28 compute-0 sshd-session[243023]: Invalid user hadoop from 80.94.92.164 port 55124
Nov 22 08:32:28 compute-0 nova_compute[189268]: 2025-11-22 08:32:28.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:32:28 compute-0 nova_compute[189268]: 2025-11-22 08:32:28.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:32:28 compute-0 nova_compute[189268]: 2025-11-22 08:32:28.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:32:28 compute-0 nova_compute[189268]: 2025-11-22 08:32:28.101 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
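The skip message documents a config guard: soft-deleted instances are only reclaimed when the operator sets reclaim_instance_interval to a positive number of seconds, and at 0 the periodic task returns immediately. A one-screen restatement of that guard (the option name comes from the log; the surrounding code is illustrative):

    # Reclaim is a no-op unless a positive interval is configured.
    reclaim_instance_interval = 0   # nova.conf [DEFAULT] value, assumed here

    def reclaim_queued_deletes():
        if reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # ...otherwise purge SOFT_DELETED instances older than the interval

    reclaim_queued_deletes()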
Nov 22 08:32:28 compute-0 sshd-session[243023]: Connection closed by invalid user hadoop 80.94.92.164 port 55124 [preauth]
Nov 22 08:32:29 compute-0 nova_compute[189268]: 2025-11-22 08:32:29.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:32:29 compute-0 podman[203476]: time="2025-11-22T08:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:32:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:32:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
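Those two access-log lines are a Go client (the podman metrics exporter) calling podman's libpod REST API over the local socket. The same query can be reproduced from the Python standard library by pointing an HTTP connection at the unix socket; the socket path and API version below are taken from this host's log and may differ elsewhere:

    # List containers via podman's libpod REST API over a unix socket.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")   # Host header only, not used for routing
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c.get("Names") for c in containers])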
Nov 22 08:32:30 compute-0 nova_compute[189268]: 2025-11-22 08:32:30.679 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:31 compute-0 openstack_network_exporter[205661]: ERROR   08:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:32:31 compute-0 openstack_network_exporter[205661]: ERROR   08:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:32:31 compute-0 openstack_network_exporter[205661]: ERROR   08:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:32:31 compute-0 openstack_network_exporter[205661]: ERROR   08:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:32:31 compute-0 openstack_network_exporter[205661]: ERROR   08:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
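The exporter's errors are all the same failure mode: ovs and ovn daemons are addressed through per-PID control sockets (for example /run/ovn/ovn-northd.<pid>.ctl), and the exporter resolves a daemon's PID by looking for that file. ovn-northd runs on control-plane nodes, not on a compute, so the lookup legitimately comes up empty here. A sketch of that discovery step, with the paths as assumptions:

    # Resolve a daemon PID from its control socket filename.
    import glob
    import re

    def find_daemon_pid(rundir, daemon):
        matches = glob.glob(f"{rundir}/{daemon}.*.ctl")
        if not matches:
            raise FileNotFoundError(
                f"no control socket files found for {daemon}")
        return int(re.search(r"\.(\d+)\.ctl$", matches[0]).group(1))

    try:
        print(find_daemon_pid("/run/ovn", "ovn-northd"))
    except FileNotFoundError as exc:
        print("ERROR:", exc)   # what the exporter logs on a compute node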
Nov 22 08:32:31 compute-0 nova_compute[189268]: 2025-11-22 08:32:31.784 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:33 compute-0 nova_compute[189268]: 2025-11-22 08:32:33.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:32:35 compute-0 nova_compute[189268]: 2025-11-22 08:32:35.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:32:35 compute-0 nova_compute[189268]: 2025-11-22 08:32:35.684 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.125 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.126 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.127 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.128 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.257 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.363 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.369 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.449 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.451 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.537 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.539 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.602 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.611 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.685 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.691 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.753 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.756 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.786 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.826 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.827 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.895 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.904 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.969 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:32:36 compute-0 nova_compute[189268]: 2025-11-22 08:32:36.975 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:37 compute-0 nova_compute[189268]: 2025-11-22 08:32:37.039 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:32:37 compute-0 nova_compute[189268]: 2025-11-22 08:32:37.041 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:37 compute-0 nova_compute[189268]: 2025-11-22 08:32:37.119 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:32:37 compute-0 nova_compute[189268]: 2025-11-22 08:32:37.121 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:32:37 compute-0 nova_compute[189268]: 2025-11-22 08:32:37.188 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
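Each disk probe above runs qemu-img info inside oslo_concurrency.prlimit, capping the child at 1 GiB of address space and 30 s of CPU so a corrupt or hostile image cannot wedge the resource audit. The call can be reproduced as shown below (requires qemu-img and the oslo.concurrency package on the host; the instance path is the one from the log):

    # Resource-limited qemu-img probe, mirroring the logged command line.
    import json
    import subprocess

    def qemu_img_info(path):
        cmd = [
            "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
            "--as=1073741824",   # 1 GiB address-space cap
            "--cpu=30",          # 30 s CPU-time cap
            "--", "env", "LC_ALL=C", "LANG=C",
            "qemu-img", "info", path, "--force-share", "--output=json",
        ]
        out = subprocess.run(cmd, capture_output=True, check=True, text=True)
        return json.loads(out.stdout)

    info = qemu_img_info(
        "/var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk")
    print(info["format"], info["virtual-size"])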
Nov 22 08:32:37 compute-0 nova_compute[189268]: 2025-11-22 08:32:37.615 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:32:37 compute-0 nova_compute[189268]: 2025-11-22 08:32:37.617 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4893MB free_disk=72.48399353027344GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:32:37 compute-0 nova_compute[189268]: 2025-11-22 08:32:37.618 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:32:37 compute-0 nova_compute[189268]: 2025-11-22 08:32:37.618 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:32:37 compute-0 nova_compute[189268]: 2025-11-22 08:32:37.690 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:32:37 compute-0 nova_compute[189268]: 2025-11-22 08:32:37.692 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance a8349cde-3de3-4359-9fba-8d329cab9476 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:32:37 compute-0 nova_compute[189268]: 2025-11-22 08:32:37.692 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:32:37 compute-0 nova_compute[189268]: 2025-11-22 08:32:37.692 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:32:37 compute-0 nova_compute[189268]: 2025-11-22 08:32:37.693 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:32:37 compute-0 nova_compute[189268]: 2025-11-22 08:32:37.773 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:32:37 compute-0 nova_compute[189268]: 2025-11-22 08:32:37.787 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
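The inventory dict above is what placement uses to size the node: schedulable capacity per resource class is (total - reserved) * allocation_ratio. Plugging in the logged values shows why this 8-vCPU host advertises 32 schedulable vCPUs while disk is shaved both by the 1 GB reserve and the 0.9 ratio:

    # Capacity math with the exact inventory from the log line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2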
Nov 22 08:32:37 compute-0 nova_compute[189268]: 2025-11-22 08:32:37.810 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:32:37 compute-0 nova_compute[189268]: 2025-11-22 08:32:37.811 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:32:39 compute-0 podman[243087]: 2025-11-22 08:32:39.171255215 +0000 UTC m=+0.106414622 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:32:39 compute-0 podman[243089]: 2025-11-22 08:32:39.177384121 +0000 UTC m=+0.096401040 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 08:32:39 compute-0 podman[243088]: 2025-11-22 08:32:39.214215218 +0000 UTC m=+0.132557930 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 08:32:40 compute-0 nova_compute[189268]: 2025-11-22 08:32:40.687 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:41 compute-0 ovn_controller[97783]: 2025-11-22T08:32:41Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:9f:dc 192.168.0.192
Nov 22 08:32:41 compute-0 ovn_controller[97783]: 2025-11-22T08:32:41Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:9f:dc 192.168.0.192
Nov 22 08:32:41 compute-0 nova_compute[189268]: 2025-11-22 08:32:41.788 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:45 compute-0 ovn_controller[97783]: 2025-11-22T08:32:45Z|00051|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Nov 22 08:32:45 compute-0 nova_compute[189268]: 2025-11-22 08:32:45.692 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:46 compute-0 podman[243163]: 2025-11-22 08:32:46.119369858 +0000 UTC m=+0.074692123 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 08:32:46 compute-0 podman[243164]: 2025-11-22 08:32:46.132947765 +0000 UTC m=+0.080140360 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=edpm)
Nov 22 08:32:46 compute-0 nova_compute[189268]: 2025-11-22 08:32:46.790 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:50 compute-0 podman[243205]: 2025-11-22 08:32:50.16974838 +0000 UTC m=+0.110399509 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, container_name=kepler, vendor=Red Hat, Inc., io.buildah.version=1.29.0, vcs-type=git, version=9.4, io.openshift.expose-services=, name=ubi9, release=1214.1726694543)
Nov 22 08:32:50 compute-0 podman[243206]: 2025-11-22 08:32:50.188660962 +0000 UTC m=+0.133041042 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:32:50 compute-0 nova_compute[189268]: 2025-11-22 08:32:50.695 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:51 compute-0 nova_compute[189268]: 2025-11-22 08:32:51.794 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:55 compute-0 nova_compute[189268]: 2025-11-22 08:32:55.699 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:56 compute-0 podman[243249]: 2025-11-22 08:32:56.182468868 +0000 UTC m=+0.118757246 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350)
Nov 22 08:32:56 compute-0 nova_compute[189268]: 2025-11-22 08:32:56.797 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:32:58 compute-0 podman[243269]: 2025-11-22 08:32:58.149867257 +0000 UTC m=+0.108528281 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:32:59 compute-0 podman[203476]: time="2025-11-22T08:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:32:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:32:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Nov 22 08:33:00 compute-0 nova_compute[189268]: 2025-11-22 08:33:00.702 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:01 compute-0 openstack_network_exporter[205661]: ERROR   08:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:33:01 compute-0 openstack_network_exporter[205661]: ERROR   08:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:33:01 compute-0 openstack_network_exporter[205661]: ERROR   08:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:33:01 compute-0 openstack_network_exporter[205661]: ERROR   08:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:33:01 compute-0 openstack_network_exporter[205661]: ERROR   08:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:33:01 compute-0 nova_compute[189268]: 2025-11-22 08:33:01.800 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:05 compute-0 nova_compute[189268]: 2025-11-22 08:33:05.706 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:06 compute-0 nova_compute[189268]: 2025-11-22 08:33:06.805 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:33:09.968 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:33:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:33:09.969 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:33:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:33:09.970 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:33:10 compute-0 podman[243294]: 2025-11-22 08:33:10.147358354 +0000 UTC m=+0.097822689 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Nov 22 08:33:10 compute-0 podman[243295]: 2025-11-22 08:33:10.158237389 +0000 UTC m=+0.100424329 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 22 08:33:10 compute-0 podman[243296]: 2025-11-22 08:33:10.159183855 +0000 UTC m=+0.093852751 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 08:33:10 compute-0 nova_compute[189268]: 2025-11-22 08:33:10.711 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:11 compute-0 nova_compute[189268]: 2025-11-22 08:33:11.808 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:15 compute-0 nova_compute[189268]: 2025-11-22 08:33:15.714 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:16 compute-0 nova_compute[189268]: 2025-11-22 08:33:16.811 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:17 compute-0 podman[243362]: 2025-11-22 08:33:17.144201538 +0000 UTC m=+0.091946210 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a)
Nov 22 08:33:17 compute-0 podman[243363]: 2025-11-22 08:33:17.146930442 +0000 UTC m=+0.088882937 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 08:33:20 compute-0 nova_compute[189268]: 2025-11-22 08:33:20.716 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:21 compute-0 podman[243402]: 2025-11-22 08:33:21.160076298 +0000 UTC m=+0.108507480 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_id=edpm, container_name=kepler, managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, release-0.7.12=, architecture=x86_64, maintainer=Red Hat, Inc.)
Nov 22 08:33:21 compute-0 podman[243403]: 2025-11-22 08:33:21.226672308 +0000 UTC m=+0.159164637 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Nov 22 08:33:21 compute-0 nova_compute[189268]: 2025-11-22 08:33:21.813 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:25 compute-0 nova_compute[189268]: 2025-11-22 08:33:25.718 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:25 compute-0 nova_compute[189268]: 2025-11-22 08:33:25.812 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:33:25 compute-0 nova_compute[189268]: 2025-11-22 08:33:25.813 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:33:26 compute-0 nova_compute[189268]: 2025-11-22 08:33:26.475 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:33:26 compute-0 nova_compute[189268]: 2025-11-22 08:33:26.476 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:33:26 compute-0 nova_compute[189268]: 2025-11-22 08:33:26.476 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:33:26 compute-0 nova_compute[189268]: 2025-11-22 08:33:26.816 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:27 compute-0 podman[243446]: 2025-11-22 08:33:27.15968998 +0000 UTC m=+0.097505292 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, release=1755695350, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 22 08:33:29 compute-0 nova_compute[189268]: 2025-11-22 08:33:29.026 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Updating instance_info_cache with network_info: [{"id": "c99bd243-1114-4104-8d75-dd481789f958", "address": "fa:16:3e:2a:fd:a4", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc99bd243-11", "ovs_interfaceid": "c99bd243-1114-4104-8d75-dd481789f958", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:33:29 compute-0 nova_compute[189268]: 2025-11-22 08:33:29.047 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:33:29 compute-0 nova_compute[189268]: 2025-11-22 08:33:29.049 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 08:33:29 compute-0 nova_compute[189268]: 2025-11-22 08:33:29.050 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:33:29 compute-0 nova_compute[189268]: 2025-11-22 08:33:29.051 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:33:29 compute-0 nova_compute[189268]: 2025-11-22 08:33:29.052 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:33:29 compute-0 nova_compute[189268]: 2025-11-22 08:33:29.101 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:33:29 compute-0 nova_compute[189268]: 2025-11-22 08:33:29.101 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:33:29 compute-0 podman[243466]: 2025-11-22 08:33:29.162612634 +0000 UTC m=+0.107947796 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:33:29 compute-0 podman[203476]: time="2025-11-22T08:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:33:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:33:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Nov 22 08:33:30 compute-0 nova_compute[189268]: 2025-11-22 08:33:30.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:33:30 compute-0 nova_compute[189268]: 2025-11-22 08:33:30.722 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:31 compute-0 nova_compute[189268]: 2025-11-22 08:33:31.093 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:33:31 compute-0 openstack_network_exporter[205661]: ERROR   08:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:33:31 compute-0 openstack_network_exporter[205661]: ERROR   08:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:33:31 compute-0 openstack_network_exporter[205661]: ERROR   08:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:33:31 compute-0 openstack_network_exporter[205661]: ERROR   08:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:33:31 compute-0 openstack_network_exporter[205661]: ERROR   08:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:33:31 compute-0 nova_compute[189268]: 2025-11-22 08:33:31.819 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:35 compute-0 nova_compute[189268]: 2025-11-22 08:33:35.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:33:35 compute-0 nova_compute[189268]: 2025-11-22 08:33:35.725 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:36 compute-0 nova_compute[189268]: 2025-11-22 08:33:36.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:33:36 compute-0 nova_compute[189268]: 2025-11-22 08:33:36.822 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.135 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.138 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.139 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.139 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.252 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.354 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.356 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.441 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.443 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.508 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.509 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.610 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.619 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.711 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.712 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.812 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.813 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.900 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.903 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.969 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:33:37 compute-0 nova_compute[189268]: 2025-11-22 08:33:37.979 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:33:38 compute-0 nova_compute[189268]: 2025-11-22 08:33:38.061 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:33:38 compute-0 nova_compute[189268]: 2025-11-22 08:33:38.062 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:33:38 compute-0 nova_compute[189268]: 2025-11-22 08:33:38.126 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:33:38 compute-0 nova_compute[189268]: 2025-11-22 08:33:38.127 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:33:38 compute-0 nova_compute[189268]: 2025-11-22 08:33:38.211 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:33:38 compute-0 nova_compute[189268]: 2025-11-22 08:33:38.212 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:33:38 compute-0 nova_compute[189268]: 2025-11-22 08:33:38.311 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:33:38 compute-0 nova_compute[189268]: 2025-11-22 08:33:38.759 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:33:38 compute-0 nova_compute[189268]: 2025-11-22 08:33:38.760 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4876MB free_disk=72.46060943603516GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:33:38 compute-0 nova_compute[189268]: 2025-11-22 08:33:38.760 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:33:38 compute-0 nova_compute[189268]: 2025-11-22 08:33:38.761 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:33:38 compute-0 nova_compute[189268]: 2025-11-22 08:33:38.836 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:33:38 compute-0 nova_compute[189268]: 2025-11-22 08:33:38.837 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance a8349cde-3de3-4359-9fba-8d329cab9476 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:33:38 compute-0 nova_compute[189268]: 2025-11-22 08:33:38.837 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:33:38 compute-0 nova_compute[189268]: 2025-11-22 08:33:38.837 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:33:38 compute-0 nova_compute[189268]: 2025-11-22 08:33:38.837 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:33:38 compute-0 nova_compute[189268]: 2025-11-22 08:33:38.903 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:33:38 compute-0 nova_compute[189268]: 2025-11-22 08:33:38.918 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:33:38 compute-0 nova_compute[189268]: 2025-11-22 08:33:38.920 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:33:38 compute-0 nova_compute[189268]: 2025-11-22 08:33:38.921 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
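Note: the "Inventory has not changed" line above fixes the schedulable capacity for this node. A minimal sketch in plain Python, with the figures copied from that inventory line, of how effective capacity follows from total, reserved, and allocation_ratio (placement's capacity formula is, to my understanding, capacity = (total - reserved) * allocation_ratio):

    # Effective schedulable capacity per resource class, using the inventory
    # figures logged by nova.scheduler.client.report above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity}")
    # VCPU: 32.0, MEMORY_MB: 7167.0, DISK_GB: 70.2

This is consistent with the "Final resource view" above: 3 of 8 physical vCPUs are allocated, but the 4.0 ratio lets placement oversubscribe up to 32 VCPU allocations.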
Nov 22 08:33:40 compute-0 nova_compute[189268]: 2025-11-22 08:33:40.728 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:41 compute-0 podman[243529]: 2025-11-22 08:33:41.148617478 +0000 UTC m=+0.075117543 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:33:41 compute-0 podman[243527]: 2025-11-22 08:33:41.168099987 +0000 UTC m=+0.100375409 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 08:33:41 compute-0 podman[243528]: 2025-11-22 08:33:41.200054706 +0000 UTC m=+0.124721461 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
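The health_status=healthy events above come from podman's native healthcheck support (the 'healthcheck' key in each container's config_data, mounting /var/lib/openstack/healthchecks/<name> and running /openstack/healthcheck). A hedged sketch, assuming a local podman and the container name taken from the first log line above, of triggering the same check by hand:

    import subprocess

    # Run the same healthcheck podman fires on its timer; exit code 0 means
    # healthy. "ovn_metadata_agent" is the container_name from the log above.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else f"unhealthy: {result.stdout}")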
Nov 22 08:33:41 compute-0 nova_compute[189268]: 2025-11-22 08:33:41.825 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:45 compute-0 nova_compute[189268]: 2025-11-22 08:33:45.731 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:46 compute-0 nova_compute[189268]: 2025-11-22 08:33:46.827 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:48 compute-0 podman[243589]: 2025-11-22 08:33:48.16196914 +0000 UTC m=+0.093835931 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 08:33:48 compute-0 podman[243588]: 2025-11-22 08:33:48.170928573 +0000 UTC m=+0.111409528 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Nov 22 08:33:50 compute-0 nova_compute[189268]: 2025-11-22 08:33:50.735 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:51 compute-0 nova_compute[189268]: 2025-11-22 08:33:51.830 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:52 compute-0 podman[243626]: 2025-11-22 08:33:52.177266205 +0000 UTC m=+0.127727592 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_id=edpm, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc.)
Nov 22 08:33:52 compute-0 podman[243627]: 2025-11-22 08:33:52.205862852 +0000 UTC m=+0.150606964 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:33:55 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:33:55.103 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:cf:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'd6:f7:8f:a1:cd:35'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:33:55 compute-0 nova_compute[189268]: 2025-11-22 08:33:55.104 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:55 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:33:55.105 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 08:33:55 compute-0 nova_compute[189268]: 2025-11-22 08:33:55.738 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:56 compute-0 nova_compute[189268]: 2025-11-22 08:33:56.832 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:33:58 compute-0 podman[243668]: 2025-11-22 08:33:58.125991924 +0000 UTC m=+0.076571723 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 22 08:33:59 compute-0 podman[203476]: time="2025-11-22T08:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:33:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:33:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
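The two GET requests above are the podman_exporter polling the libpod REST API over the unix socket it mounts (/run/podman/podman.sock, per its config_data earlier in this log). A minimal stdlib-only sketch of the same containers/json query, with the endpoint copied from the access-log line and response parsing kept to the bare minimum; running it requires read access to the socket:

    import socket

    # Issue the same libpod API call the exporter makes, over the podman socket.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/run/podman/podman.sock")
    sock.sendall(
        b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n"
        b"Host: d\r\n\r\n"
    )
    raw = b""
    while chunk := sock.recv(4096):
        raw += chunk
    sock.close()
    # The JSON body follows the blank line that ends the HTTP headers.
    print(raw.split(b"\r\n\r\n", 1)[1][:200])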
Nov 22 08:34:00 compute-0 podman[243688]: 2025-11-22 08:34:00.158281965 +0000 UTC m=+0.104654286 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:34:00 compute-0 nova_compute[189268]: 2025-11-22 08:34:00.740 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:01 compute-0 openstack_network_exporter[205661]: ERROR   08:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:34:01 compute-0 openstack_network_exporter[205661]: ERROR   08:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:34:01 compute-0 openstack_network_exporter[205661]: ERROR   08:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:34:01 compute-0 openstack_network_exporter[205661]: ERROR   08:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:34:01 compute-0 openstack_network_exporter[205661]: ERROR   08:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
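The appctl errors above are the exporter probing daemons that do not exist on this host: ovn-northd runs on the control plane, so no local control socket is found, and the dpif-netdev/* commands only apply to a userspace (DPDK) datapath, while this node binds ports with datapath_type=system (see the network_info later in this log). A hedged sketch of the socket discovery those errors imply; the glob pattern is an assumption based on the usual <daemon>.<pid>.ctl naming and the ovn runtime directory mounted in the container configs above:

    import glob

    # ovs/ovn daemons expose control sockets named <daemon>.<pid>.ctl in their
    # runtime dir; the exporter fails because nothing matches on this host.
    matches = glob.glob("/run/ovn/ovn-northd.*.ctl")
    print(matches or "no control socket files found for ovn-northd")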
Nov 22 08:34:01 compute-0 nova_compute[189268]: 2025-11-22 08:34:01.836 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:04 compute-0 nova_compute[189268]: 2025-11-22 08:34:04.325 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:34:04 compute-0 nova_compute[189268]: 2025-11-22 08:34:04.326 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:34:04 compute-0 nova_compute[189268]: 2025-11-22 08:34:04.342 189273 DEBUG nova.compute.manager [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 08:34:04 compute-0 nova_compute[189268]: 2025-11-22 08:34:04.443 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:34:04 compute-0 nova_compute[189268]: 2025-11-22 08:34:04.444 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:34:04 compute-0 nova_compute[189268]: 2025-11-22 08:34:04.456 189273 DEBUG nova.virt.hardware [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 08:34:04 compute-0 nova_compute[189268]: 2025-11-22 08:34:04.457 189273 INFO nova.compute.claims [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Claim successful on node compute-0.ctlplane.example.com
Nov 22 08:34:04 compute-0 nova_compute[189268]: 2025-11-22 08:34:04.637 189273 DEBUG nova.compute.provider_tree [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:34:04 compute-0 nova_compute[189268]: 2025-11-22 08:34:04.655 189273 DEBUG nova.scheduler.client.report [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:34:04 compute-0 nova_compute[189268]: 2025-11-22 08:34:04.674 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
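The Acquiring/acquired/released triplets throughout this section are oslo.concurrency's lockutils instrumentation; the instance claim above held "compute_resources" for 0.230s while the tracker tested and recorded the claim against the same resource view the periodic task updates. A minimal sketch of that pattern using the oslo.concurrency API (the critical-section function is hypothetical):

    from oslo_concurrency import lockutils

    def claim_resources():
        """Hypothetical critical section standing in for instance_claim()."""
        print("claim recorded")

    # One named lock serializes every mutation of the shared resource view,
    # which is why the periodic update and the claim never interleave.
    with lockutils.lock("compute_resources"):
        claim_resources()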
Nov 22 08:34:04 compute-0 nova_compute[189268]: 2025-11-22 08:34:04.675 189273 DEBUG nova.compute.manager [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 08:34:04 compute-0 nova_compute[189268]: 2025-11-22 08:34:04.987 189273 DEBUG nova.compute.manager [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 08:34:04 compute-0 nova_compute[189268]: 2025-11-22 08:34:04.987 189273 DEBUG nova.network.neutron [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.009 189273 INFO nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.041 189273 DEBUG nova.compute.manager [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 08:34:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:34:05.106 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.126 189273 DEBUG nova.compute.manager [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.130 189273 DEBUG nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.131 189273 INFO nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Creating image(s)
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.132 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "/var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.133 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.135 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.164 189273 DEBUG oslo_concurrency.processutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.228 189273 DEBUG oslo_concurrency.processutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.229 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "3743d624bf4f49380cb6de0480bbb028361f5cb4" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.231 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "3743d624bf4f49380cb6de0480bbb028361f5cb4" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.251 189273 DEBUG oslo_concurrency.processutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.316 189273 DEBUG oslo_concurrency.processutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.318 189273 DEBUG oslo_concurrency.processutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4,backing_fmt=raw /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.367 189273 DEBUG oslo_concurrency.processutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4,backing_fmt=raw /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk 1073741824" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.368 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "3743d624bf4f49380cb6de0480bbb028361f5cb4" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
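In the sequence above the root disk is created as a qcow2 copy-on-write overlay on top of the cached base image, under a lock named after the base image's checksum so concurrent builds of the same image cannot race; the qemu-img info probes are additionally wrapped in oslo_concurrency.prlimit to cap address space and CPU time. The equivalent commands can be run by hand; a short subprocess sketch with the paths copied from the log:

    import subprocess

    base = "/var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4"
    disk = "/var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk"

    # Create a 1 GiB qcow2 overlay whose backing file is the raw base image,
    # exactly as the log shows nova doing.
    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-o", f"backing_file={base},backing_fmt=raw", disk, "1073741824"],
        check=True,
    )
    # Inspect the result; --backing-chain prints the overlay -> base relationship.
    subprocess.run(["qemu-img", "info", "--backing-chain", disk], check=True)

The same create-under-lock dance repeats just below for the ephemeral disk (disk.eph0) against the ephemeral_1_0706d66 base.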
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.368 189273 DEBUG oslo_concurrency.processutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.448 189273 DEBUG oslo_concurrency.processutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.449 189273 DEBUG nova.virt.disk.api [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Checking if we can resize image /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.450 189273 DEBUG oslo_concurrency.processutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.512 189273 DEBUG oslo_concurrency.processutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.513 189273 DEBUG nova.virt.disk.api [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Cannot resize image /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.515 189273 DEBUG nova.objects.instance [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lazy-loading 'migration_context' on Instance uuid 64e4ab2b-2a08-4c3c-9561-94454cb0b482 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.530 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "/var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.530 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.532 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.545 189273 DEBUG oslo_concurrency.processutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.604 189273 DEBUG oslo_concurrency.processutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.605 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.605 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.623 189273 DEBUG oslo_concurrency.processutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.684 189273 DEBUG oslo_concurrency.processutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.685 189273 DEBUG oslo_concurrency.processutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.727 189273 DEBUG oslo_concurrency.processutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.729 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.730 189273 DEBUG oslo_concurrency.processutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.745 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.794 189273 DEBUG oslo_concurrency.processutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.795 189273 DEBUG nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.797 189273 DEBUG nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Ensure instance console log exists: /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.798 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.798 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:34:05 compute-0 nova_compute[189268]: 2025-11-22 08:34:05.799 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:34:06 compute-0 nova_compute[189268]: 2025-11-22 08:34:06.514 189273 DEBUG nova.network.neutron [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Successfully updated port: 433ff318-0c74-4ba4-ac48-8114bc74a566 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 08:34:06 compute-0 nova_compute[189268]: 2025-11-22 08:34:06.530 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "refresh_cache-64e4ab2b-2a08-4c3c-9561-94454cb0b482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:34:06 compute-0 nova_compute[189268]: 2025-11-22 08:34:06.531 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquired lock "refresh_cache-64e4ab2b-2a08-4c3c-9561-94454cb0b482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:34:06 compute-0 nova_compute[189268]: 2025-11-22 08:34:06.531 189273 DEBUG nova.network.neutron [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 08:34:06 compute-0 nova_compute[189268]: 2025-11-22 08:34:06.613 189273 DEBUG nova.compute.manager [req-d079e502-e81b-432e-b802-bbc03ea3b16b req-a5a20bee-d754-4caa-a934-39a6b201e9e6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Received event network-changed-433ff318-0c74-4ba4-ac48-8114bc74a566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:34:06 compute-0 nova_compute[189268]: 2025-11-22 08:34:06.614 189273 DEBUG nova.compute.manager [req-d079e502-e81b-432e-b802-bbc03ea3b16b req-a5a20bee-d754-4caa-a934-39a6b201e9e6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Refreshing instance network info cache due to event network-changed-433ff318-0c74-4ba4-ac48-8114bc74a566. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:34:06 compute-0 nova_compute[189268]: 2025-11-22 08:34:06.614 189273 DEBUG oslo_concurrency.lockutils [req-d079e502-e81b-432e-b802-bbc03ea3b16b req-a5a20bee-d754-4caa-a934-39a6b201e9e6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-64e4ab2b-2a08-4c3c-9561-94454cb0b482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:34:06 compute-0 nova_compute[189268]: 2025-11-22 08:34:06.680 189273 DEBUG nova.network.neutron [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 08:34:06 compute-0 nova_compute[189268]: 2025-11-22 08:34:06.838 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:08 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.892 189273 DEBUG nova.network.neutron [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Updating instance_info_cache with network_info: [{"id": "433ff318-0c74-4ba4-ac48-8114bc74a566", "address": "fa:16:3e:4d:1a:4a", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433ff318-0c", "ovs_interfaceid": "433ff318-0c74-4ba4-ac48-8114bc74a566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.914 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Releasing lock "refresh_cache-64e4ab2b-2a08-4c3c-9561-94454cb0b482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.915 189273 DEBUG nova.compute.manager [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Instance network_info: |[{"id": "433ff318-0c74-4ba4-ac48-8114bc74a566", "address": "fa:16:3e:4d:1a:4a", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433ff318-0c", "ovs_interfaceid": "433ff318-0c74-4ba4-ac48-8114bc74a566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.915 189273 DEBUG oslo_concurrency.lockutils [req-d079e502-e81b-432e-b802-bbc03ea3b16b req-a5a20bee-d754-4caa-a934-39a6b201e9e6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-64e4ab2b-2a08-4c3c-9561-94454cb0b482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.915 189273 DEBUG nova.network.neutron [req-d079e502-e81b-432e-b802-bbc03ea3b16b req-a5a20bee-d754-4caa-a934-39a6b201e9e6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Refreshing network info cache for port 433ff318-0c74-4ba4-ac48-8114bc74a566 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.918 189273 DEBUG nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Start _get_guest_xml network_info=[{"id": "433ff318-0c74-4ba4-ac48-8114bc74a566", "address": "fa:16:3e:4d:1a:4a", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433ff318-0c", "ovs_interfaceid": "433ff318-0c74-4ba4-ac48-8114bc74a566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-22T08:23:24Z,direct_url=<?>,disk_format='qcow2',id=de9f57cf-28b4-4cbd-b943-19aa098356bf,min_disk=0,min_ram=0,name='cirros',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-22T08:23:25Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}], 'ephemerals': [{'device_name': '/dev/vdb', 'device_type': 'disk', 'size': 1, 'encryption_options': None, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.924 189273 WARNING nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.931 189273 DEBUG nova.virt.libvirt.host [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.931 189273 DEBUG nova.virt.libvirt.host [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.935 189273 DEBUG nova.virt.libvirt.host [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.936 189273 DEBUG nova.virt.libvirt.host [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.936 189273 DEBUG nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.937 189273 DEBUG nova.virt.hardware [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T08:23:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='796e25a8-f28d-499e-b2fb-dfae32f0eed7',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-22T08:23:24Z,direct_url=<?>,disk_format='qcow2',id=de9f57cf-28b4-4cbd-b943-19aa098356bf,min_disk=0,min_ram=0,name='cirros',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-22T08:23:25Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.937 189273 DEBUG nova.virt.hardware [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.938 189273 DEBUG nova.virt.hardware [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.938 189273 DEBUG nova.virt.hardware [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.938 189273 DEBUG nova.virt.hardware [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.938 189273 DEBUG nova.virt.hardware [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.939 189273 DEBUG nova.virt.hardware [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.939 189273 DEBUG nova.virt.hardware [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.940 189273 DEBUG nova.virt.hardware [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.940 189273 DEBUG nova.virt.hardware [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.940 189273 DEBUG nova.virt.hardware [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.944 189273 DEBUG nova.virt.libvirt.vif [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:34:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-qv6tptr-cfkm2etzuijf-gntxycdg4jfb-vnf-tuynx42zciyf',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-qv6tptr-cfkm2etzuijf-gntxycdg4jfb-vnf-tuynx42zciyf',id=5,image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='209b9e59-811e-4c2b-a756-c29ba92c4b5c'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='80e46844b3824928a6138235e5ede512',ramdisk_id='',reservation_id='r-dm7ragq6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:34:05Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01MjIyNzIyODM1MjMzODIzNzcyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTUyMjI3MjI4MzUyMzM4MjM3NzI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTIyMjcyMjgzNTIzMzgyMzc3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTUyMjI3MjI4MzUyMzM4MjM3NzI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01MjIyNzIyODM1MjMzODIzNzcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01MjIyNzIyODM1MjMzODIzNzcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTIyMjcyMjgzNTIzMzgyMzc3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTUyMjI3MjI4MzUyMzM4MjM3NzI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01MjIyNzIyODM1MjMzODIzNzcyPT0tLQo=',user_id='27ed1dd009ad4e29863ab5e3a9826c94',uuid=64e4ab2b-2a08-4c3c-9561-94454cb0b482,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "433ff318-0c74-4ba4-ac48-8114bc74a566", "address": "fa:16:3e:4d:1a:4a", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433ff318-0c", "ovs_interfaceid": "433ff318-0c74-4ba4-ac48-8114bc74a566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.944 189273 DEBUG nova.network.os_vif_util [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converting VIF {"id": "433ff318-0c74-4ba4-ac48-8114bc74a566", "address": "fa:16:3e:4d:1a:4a", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433ff318-0c", "ovs_interfaceid": "433ff318-0c74-4ba4-ac48-8114bc74a566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.945 189273 DEBUG nova.network.os_vif_util [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:1a:4a,bridge_name='br-int',has_traffic_filtering=True,id=433ff318-0c74-4ba4-ac48-8114bc74a566,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap433ff318-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.946 189273 DEBUG nova.objects.instance [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lazy-loading 'pci_devices' on Instance uuid 64e4ab2b-2a08-4c3c-9561-94454cb0b482 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.957 189273 DEBUG nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] End _get_guest_xml xml=<domain type="kvm">
Nov 22 08:34:09 compute-0 nova_compute[189268]:   <uuid>64e4ab2b-2a08-4c3c-9561-94454cb0b482</uuid>
Nov 22 08:34:09 compute-0 nova_compute[189268]:   <name>instance-00000005</name>
Nov 22 08:34:09 compute-0 nova_compute[189268]:   <memory>524288</memory>
Nov 22 08:34:09 compute-0 nova_compute[189268]:   <vcpu>1</vcpu>
Nov 22 08:34:09 compute-0 nova_compute[189268]:   <metadata>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <nova:name>vn-qv6tptr-cfkm2etzuijf-gntxycdg4jfb-vnf-tuynx42zciyf</nova:name>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <nova:creationTime>2025-11-22 08:34:09</nova:creationTime>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <nova:flavor name="m1.small">
Nov 22 08:34:09 compute-0 nova_compute[189268]:         <nova:memory>512</nova:memory>
Nov 22 08:34:09 compute-0 nova_compute[189268]:         <nova:disk>1</nova:disk>
Nov 22 08:34:09 compute-0 nova_compute[189268]:         <nova:swap>0</nova:swap>
Nov 22 08:34:09 compute-0 nova_compute[189268]:         <nova:ephemeral>1</nova:ephemeral>
Nov 22 08:34:09 compute-0 nova_compute[189268]:         <nova:vcpus>1</nova:vcpus>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       </nova:flavor>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <nova:owner>
Nov 22 08:34:09 compute-0 nova_compute[189268]:         <nova:user uuid="27ed1dd009ad4e29863ab5e3a9826c94">admin</nova:user>
Nov 22 08:34:09 compute-0 nova_compute[189268]:         <nova:project uuid="80e46844b3824928a6138235e5ede512">admin</nova:project>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       </nova:owner>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <nova:root type="image" uuid="de9f57cf-28b4-4cbd-b943-19aa098356bf"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <nova:ports>
Nov 22 08:34:09 compute-0 nova_compute[189268]:         <nova:port uuid="433ff318-0c74-4ba4-ac48-8114bc74a566">
Nov 22 08:34:09 compute-0 nova_compute[189268]:           <nova:ip type="fixed" address="192.168.0.63" ipVersion="4"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:         </nova:port>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       </nova:ports>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     </nova:instance>
Nov 22 08:34:09 compute-0 nova_compute[189268]:   </metadata>
Nov 22 08:34:09 compute-0 nova_compute[189268]:   <sysinfo type="smbios">
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <system>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <entry name="manufacturer">RDO</entry>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <entry name="product">OpenStack Compute</entry>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <entry name="serial">64e4ab2b-2a08-4c3c-9561-94454cb0b482</entry>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <entry name="uuid">64e4ab2b-2a08-4c3c-9561-94454cb0b482</entry>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <entry name="family">Virtual Machine</entry>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     </system>
Nov 22 08:34:09 compute-0 nova_compute[189268]:   </sysinfo>
Nov 22 08:34:09 compute-0 nova_compute[189268]:   <os>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <boot dev="hd"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <smbios mode="sysinfo"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:   </os>
Nov 22 08:34:09 compute-0 nova_compute[189268]:   <features>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <acpi/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <apic/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <vmcoreinfo/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:   </features>
Nov 22 08:34:09 compute-0 nova_compute[189268]:   <clock offset="utc">
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <timer name="hpet" present="no"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:   </clock>
Nov 22 08:34:09 compute-0 nova_compute[189268]:   <cpu mode="host-model" match="exact">
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:34:09 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <target dev="vda" bus="virtio"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <target dev="vdb" bus="virtio"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <disk type="file" device="cdrom">
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <driver name="qemu" type="raw" cache="none"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.config"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <target dev="sda" bus="sata"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <interface type="ethernet">
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <mac address="fa:16:3e:4d:1a:4a"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <mtu size="1442"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <target dev="tap433ff318-0c"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     </interface>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <serial type="pty">
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <log file="/var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/console.log" append="off"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     </serial>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <video>
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     </video>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <input type="tablet" bus="usb"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <rng model="virtio">
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <backend model="random">/dev/urandom</backend>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <controller type="usb" index="0"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     <memballoon model="virtio">
Nov 22 08:34:09 compute-0 nova_compute[189268]:       <stats period="10"/>
Nov 22 08:34:09 compute-0 nova_compute[189268]:     </memballoon>
Nov 22 08:34:09 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:34:09 compute-0 nova_compute[189268]: </domain>
Nov 22 08:34:09 compute-0 nova_compute[189268]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.958 189273 DEBUG nova.compute.manager [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Preparing to wait for external event network-vif-plugged-433ff318-0c74-4ba4-ac48-8114bc74a566 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.959 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.959 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.959 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.960 189273 DEBUG nova.virt.libvirt.vif [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:34:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-qv6tptr-cfkm2etzuijf-gntxycdg4jfb-vnf-tuynx42zciyf',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-qv6tptr-cfkm2etzuijf-gntxycdg4jfb-vnf-tuynx42zciyf',id=5,image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='209b9e59-811e-4c2b-a756-c29ba92c4b5c'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='80e46844b3824928a6138235e5ede512',ramdisk_id='',reservation_id='r-dm7ragq6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:34:05Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01MjIyNzIyODM1MjMzODIzNzcyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTUyMjI3MjI4MzUyMzM4MjM3NzI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTIyMjcyMjgzNTIzMzgyMzc3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTUyMjI3MjI4MzUyMzM4MjM3NzI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01MjIyNzIyODM1MjMzODIzNzcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01MjIyNzIyODM1MjMzODIzNzcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTIyMjcyMjgzNTIzMzgyMzc3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTUyMjI3MjI4MzUyMzM4MjM3NzI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01MjIyNzIyODM1MjMzODIzNzcyPT0tLQo=',user_id='27ed1dd009ad4e29863ab5e3a9826c94',uuid=64e4ab2b-2a08-4c3c-9561-94454cb0b482,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "433ff318-0c74-4ba4-ac48-8114bc74a566", "address": "fa:16:3e:4d:1a:4a", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433ff318-0c", "ovs_interfaceid": "433ff318-0c74-4ba4-ac48-8114bc74a566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.960 189273 DEBUG nova.network.os_vif_util [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converting VIF {"id": "433ff318-0c74-4ba4-ac48-8114bc74a566", "address": "fa:16:3e:4d:1a:4a", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433ff318-0c", "ovs_interfaceid": "433ff318-0c74-4ba4-ac48-8114bc74a566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.961 189273 DEBUG nova.network.os_vif_util [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:1a:4a,bridge_name='br-int',has_traffic_filtering=True,id=433ff318-0c74-4ba4-ac48-8114bc74a566,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap433ff318-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.961 189273 DEBUG os_vif [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:1a:4a,bridge_name='br-int',has_traffic_filtering=True,id=433ff318-0c74-4ba4-ac48-8114bc74a566,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap433ff318-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.961 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.962 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.962 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.966 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.966 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap433ff318-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.967 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap433ff318-0c, col_values=(('external_ids', {'iface-id': '433ff318-0c74-4ba4-ac48-8114bc74a566', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:1a:4a', 'vm-uuid': '64e4ab2b-2a08-4c3c-9561-94454cb0b482'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
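
Taken together, the AddPortCommand/DbSetCommand transaction above is the whole OVS side of the VIF plug: attach the tap device to br-int and stamp its Interface row with the Neutron port ID so ovn-controller can claim it. A rough hand-run equivalent using ovs-vsctl, with the values from the log (useful for debugging only; os-vif drives the same change through ovsdbapp directly, as the transaction log shows):

    # Sketch: ovs-vsctl equivalent of the logged AddPortCommand + DbSetCommand.
    import subprocess

    port = 'tap433ff318-0c'
    subprocess.run([
        'ovs-vsctl', '--may-exist', 'add-port', 'br-int', port,
        '--', 'set', 'Interface', port,
        'external_ids:iface-id=433ff318-0c74-4ba4-ac48-8114bc74a566',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:4d:1a:4a',
        'external_ids:vm-uuid=64e4ab2b-2a08-4c3c-9561-94454cb0b482',
    ], check=True)
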
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.969 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:34:09.969 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:34:09 compute-0 NetworkManager[56326]: <info>  [1763800449.9700] manager: (tap433ff318-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Nov 22 08:34:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:34:09.970 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.970 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:34:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:34:09.971 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.980 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:09 compute-0 nova_compute[189268]: 2025-11-22 08:34:09.981 189273 INFO os_vif [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:1a:4a,bridge_name='br-int',has_traffic_filtering=True,id=433ff318-0c74-4ba4-ac48-8114bc74a566,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap433ff318-0c')
Nov 22 08:34:10 compute-0 nova_compute[189268]: 2025-11-22 08:34:10.022 189273 DEBUG nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:34:10 compute-0 nova_compute[189268]: 2025-11-22 08:34:10.022 189273 DEBUG nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:34:10 compute-0 nova_compute[189268]: 2025-11-22 08:34:10.022 189273 DEBUG nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:34:10 compute-0 nova_compute[189268]: 2025-11-22 08:34:10.022 189273 DEBUG nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No VIF found with MAC fa:16:3e:4d:1a:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 08:34:10 compute-0 nova_compute[189268]: 2025-11-22 08:34:10.023 189273 INFO nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Using config drive
Nov 22 08:34:10 compute-0 rsyslogd[236668]: message too long (8192) with configured size 8096, begin of message is: 2025-11-22 08:34:09.944 189273 DEBUG nova.virt.libvirt.vif [None req-86991938-ee [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 22 08:34:10 compute-0 rsyslogd[236668]: message too long (8192) with configured size 8096, begin of message is: 2025-11-22 08:34:09.960 189273 DEBUG nova.virt.libvirt.vif [None req-86991938-ee [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
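
These two warnings explain why the nova_compute entry opening this excerpt begins mid-payload: the 08:34:09.944 nova.virt.libvirt.vif debug message (the one carrying the instance record and userdata blob) and a companion at 08:34:09.960 exceeded rsyslog's 8096-byte cap and were split and truncated. If complete entries are needed, the usual remedy is raising the cap with rsyslog's global(maxMessageSize="...") directive near the top of /etc/rsyslog.conf; a larger value such as "64k" is a suggestion here, not something read from this host's configuration.
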
Nov 22 08:34:10 compute-0 nova_compute[189268]: 2025-11-22 08:34:10.983 189273 INFO nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Creating config drive at /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.config
Nov 22 08:34:10 compute-0 nova_compute[189268]: 2025-11-22 08:34:10.990 189273 DEBUG oslo_concurrency.processutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiry2827_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:11 compute-0 nova_compute[189268]: 2025-11-22 08:34:11.130 189273 DEBUG oslo_concurrency.processutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiry2827_" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
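
The config drive is an ISO 9660 image labelled config-2, built from a temporary directory and attached to the guest as a cdrom. A quick way to sanity-check the freshly built image from the compute host, assuming genisoimage's isoinfo is installed (mounting the image read-only works just as well):

    # Sketch: list the contents of the config drive built by the mkisofs run
    # logged above (read-only inspection).
    import subprocess

    iso = '/var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.config'
    print(subprocess.run(['isoinfo', '-l', '-R', '-i', iso],
                         capture_output=True, text=True).stdout)
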
Nov 22 08:34:11 compute-0 kernel: tap433ff318-0c: entered promiscuous mode
Nov 22 08:34:11 compute-0 NetworkManager[56326]: <info>  [1763800451.2118] manager: (tap433ff318-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Nov 22 08:34:11 compute-0 ovn_controller[97783]: 2025-11-22T08:34:11Z|00052|binding|INFO|Claiming lport 433ff318-0c74-4ba4-ac48-8114bc74a566 for this chassis.
Nov 22 08:34:11 compute-0 ovn_controller[97783]: 2025-11-22T08:34:11Z|00053|binding|INFO|433ff318-0c74-4ba4-ac48-8114bc74a566: Claiming fa:16:3e:4d:1a:4a 192.168.0.63
Nov 22 08:34:11 compute-0 nova_compute[189268]: 2025-11-22 08:34:11.214 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:34:11.224 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:1a:4a 192.168.0.63'], port_security=['fa:16:3e:4d:1a:4a 192.168.0.63'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-eigzbqv6tptr-cfkm2etzuijf-gntxycdg4jfb-port-v6rwy3qsqi6x', 'neutron:cidrs': '192.168.0.63/24', 'neutron:device_id': '64e4ab2b-2a08-4c3c-9561-94454cb0b482', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-eigzbqv6tptr-cfkm2etzuijf-gntxycdg4jfb-port-v6rwy3qsqi6x', 'neutron:project_id': '80e46844b3824928a6138235e5ede512', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d35d3a2-03b3-4b0d-a4c4-f066616bbaa8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a46a1c4a-0f65-4313-a2a5-5e5bba4e3fd3, chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=433ff318-0c74-4ba4-ac48-8114bc74a566) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:34:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:34:11.225 106642 INFO neutron.agent.ovn.metadata.agent [-] Port 433ff318-0c74-4ba4-ac48-8114bc74a566 in datapath 02517cc7-8060-4764-b9b0-b1d7f59e3ae8 bound to our chassis
Nov 22 08:34:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:34:11.226 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02517cc7-8060-4764-b9b0-b1d7f59e3ae8
Nov 22 08:34:11 compute-0 ovn_controller[97783]: 2025-11-22T08:34:11Z|00054|binding|INFO|Setting lport 433ff318-0c74-4ba4-ac48-8114bc74a566 ovn-installed in OVS
Nov 22 08:34:11 compute-0 ovn_controller[97783]: 2025-11-22T08:34:11Z|00055|binding|INFO|Setting lport 433ff318-0c74-4ba4-ac48-8114bc74a566 up in Southbound
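
At this point ovn-controller has claimed the logical port for this chassis, marked the OVS interface ovn-installed, and set the Southbound Port_Binding up; that up transition is what fans out as the network-vif-plugged event nova receives a moment later. A read-only check of the binding, run wherever ovn-sbctl can reach the Southbound DB:

    # Sketch: confirm which chassis holds the Port_Binding claimed above.
    import subprocess

    subprocess.run(['ovn-sbctl', 'find', 'Port_Binding',
                    'logical_port=433ff318-0c74-4ba4-ac48-8114bc74a566'],
                   check=True)
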
Nov 22 08:34:11 compute-0 nova_compute[189268]: 2025-11-22 08:34:11.237 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:34:11.242 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[5d04567c-f87f-4b47-9bbb-647c4e808b0c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:34:11 compute-0 nova_compute[189268]: 2025-11-22 08:34:11.251 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:11 compute-0 systemd-udevd[243796]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:34:11 compute-0 systemd-machined[155703]: New machine qemu-5-instance-00000005.
Nov 22 08:34:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:34:11.278 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[016e4893-21ac-4179-9c61-582c4d4b6c9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:34:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:34:11.282 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[4e3cb1c6-ccdf-440e-9863-f37075b71a33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:34:11 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Nov 22 08:34:11 compute-0 NetworkManager[56326]: <info>  [1763800451.2940] device (tap433ff318-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 08:34:11 compute-0 NetworkManager[56326]: <info>  [1763800451.2979] device (tap433ff318-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 08:34:11 compute-0 podman[243755]: 2025-11-22 08:34:11.318697543 +0000 UTC m=+0.115806839 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:34:11 compute-0 podman[243753]: 2025-11-22 08:34:11.31673627 +0000 UTC m=+0.115173592 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:34:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:34:11.317 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[56355cf5-3878-4767-b20b-a7792b921f1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:34:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:34:11.348 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9d1c9e-c99a-4ba1-8949-1f01985549a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02517cc7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:86:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 13, 'rx_bytes': 532, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 13, 'rx_bytes': 532, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501085, 'reachable_time': 39670, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243821, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:34:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:34:11.366 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[b85303b9-4540-4ead-a2bd-31dc977233ec]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap02517cc7-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501097, 'tstamp': 501097}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243834, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap02517cc7-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501100, 'tstamp': 501100}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243834, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
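
The two privsep replies above show what metadata provisioning produced: a namespace ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8 whose tap02517cc7-81 interface carries both 192.168.0.2/24 and the metadata address 169.254.169.254/32, behind which the agent's haproxy wrapper serves instance metadata. Inspecting it from the host is a one-liner (root required, read-only):

    # Sketch: inspect the OVN metadata namespace referenced in the privsep
    # replies above.
    import subprocess

    ns = 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8'
    subprocess.run(['ip', 'netns', 'exec', ns, 'ip', '-brief', 'addr'], check=True)
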
Nov 22 08:34:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:34:11.368 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02517cc7-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:34:11 compute-0 podman[243771]: 2025-11-22 08:34:11.369286987 +0000 UTC m=+0.122368246 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:34:11 compute-0 nova_compute[189268]: 2025-11-22 08:34:11.370 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:11 compute-0 nova_compute[189268]: 2025-11-22 08:34:11.371 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:34:11.372 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02517cc7-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:34:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:34:11.372 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:34:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:34:11.373 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02517cc7-80, col_values=(('external_ids', {'iface-id': '5e2a8859-83a6-4000-bcad-5571f3c7bd5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:34:11 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:34:11.373 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:34:11 compute-0 nova_compute[189268]: 2025-11-22 08:34:11.841 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.046 189273 DEBUG nova.compute.manager [req-6af637a7-a982-418e-908d-d197de6690fa req-6af4306a-f23e-4502-b3ef-c781c64fdd9d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Received event network-vif-plugged-433ff318-0c74-4ba4-ac48-8114bc74a566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.047 189273 DEBUG oslo_concurrency.lockutils [req-6af637a7-a982-418e-908d-d197de6690fa req-6af4306a-f23e-4502-b3ef-c781c64fdd9d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.048 189273 DEBUG oslo_concurrency.lockutils [req-6af637a7-a982-418e-908d-d197de6690fa req-6af4306a-f23e-4502-b3ef-c781c64fdd9d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.048 189273 DEBUG oslo_concurrency.lockutils [req-6af637a7-a982-418e-908d-d197de6690fa req-6af4306a-f23e-4502-b3ef-c781c64fdd9d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.048 189273 DEBUG nova.compute.manager [req-6af637a7-a982-418e-908d-d197de6690fa req-6af4306a-f23e-4502-b3ef-c781c64fdd9d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Processing event network-vif-plugged-433ff318-0c74-4ba4-ac48-8114bc74a566 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.049 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763800452.0434074, 64e4ab2b-2a08-4c3c-9561-94454cb0b482 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.049 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] VM Started (Lifecycle Event)
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.051 189273 DEBUG nova.compute.manager [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.062 189273 DEBUG nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.069 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.071 189273 INFO nova.virt.libvirt.driver [-] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Instance spawned successfully.
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.071 189273 DEBUG nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.076 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.095 189273 DEBUG nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.096 189273 DEBUG nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.096 189273 DEBUG nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.097 189273 DEBUG nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.097 189273 DEBUG nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.097 189273 DEBUG nova.virt.libvirt.driver [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.106 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.107 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763800452.0436857, 64e4ab2b-2a08-4c3c-9561-94454cb0b482 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.107 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] VM Paused (Lifecycle Event)
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.131 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.136 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763800452.0549214, 64e4ab2b-2a08-4c3c-9561-94454cb0b482 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.136 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] VM Resumed (Lifecycle Event)
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.155 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.161 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
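
The lifecycle handler above is comparing numeric power states: DB power_state 0 against the hypervisor's 1. Nova's constants (from nova/compute/power_state.py, values as commonly documented) make such lines readable:

    # Sketch: nova power_state constants, for reading lines like
    # "current DB power_state: 0, VM power_state: 1" above.
    POWER_STATES = {
        0: 'NOSTATE',    # DB side: no state recorded yet while building
        1: 'RUNNING',    # hypervisor side: libvirt reports the guest running
        3: 'PAUSED',
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }
    print(POWER_STATES[0], '->', POWER_STATES[1])
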
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.167 189273 INFO nova.compute.manager [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Took 7.04 seconds to spawn the instance on the hypervisor.
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.167 189273 DEBUG nova.compute.manager [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.178 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.225 189273 INFO nova.compute.manager [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Took 7.83 seconds to build instance.
Nov 22 08:34:12 compute-0 nova_compute[189268]: 2025-11-22 08:34:12.241 189273 DEBUG oslo_concurrency.lockutils [None req-86991938-ee8e-4686-a560-4cddcac94844 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.915s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:34:13 compute-0 nova_compute[189268]: 2025-11-22 08:34:13.005 189273 DEBUG nova.network.neutron [req-d079e502-e81b-432e-b802-bbc03ea3b16b req-a5a20bee-d754-4caa-a934-39a6b201e9e6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Updated VIF entry in instance network info cache for port 433ff318-0c74-4ba4-ac48-8114bc74a566. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:34:13 compute-0 nova_compute[189268]: 2025-11-22 08:34:13.006 189273 DEBUG nova.network.neutron [req-d079e502-e81b-432e-b802-bbc03ea3b16b req-a5a20bee-d754-4caa-a934-39a6b201e9e6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Updating instance_info_cache with network_info: [{"id": "433ff318-0c74-4ba4-ac48-8114bc74a566", "address": "fa:16:3e:4d:1a:4a", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433ff318-0c", "ovs_interfaceid": "433ff318-0c74-4ba4-ac48-8114bc74a566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:34:13 compute-0 nova_compute[189268]: 2025-11-22 08:34:13.019 189273 DEBUG oslo_concurrency.lockutils [req-d079e502-e81b-432e-b802-bbc03ea3b16b req-a5a20bee-d754-4caa-a934-39a6b201e9e6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-64e4ab2b-2a08-4c3c-9561-94454cb0b482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
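
The network_info dumped two entries up is plain JSON, so pulling the fixed/floating address pairs out of it is straightforward. A sketch, with 'network_info.json' standing in as a hypothetical saved copy of that list:

    # Sketch: extract fixed and floating IPs from the network_info JSON
    # logged above.
    import json

    for vif in json.load(open('network_info.json')):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print(vif['id'], ip['address'],
                      [f['address'] for f in ip['floating_ips']])
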
Nov 22 08:34:14 compute-0 nova_compute[189268]: 2025-11-22 08:34:14.139 189273 DEBUG nova.compute.manager [req-2f46c572-34a4-47e3-a78c-e45e09fa155a req-51be2ec6-6d4e-4130-b845-076fc68b56f8 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Received event network-vif-plugged-433ff318-0c74-4ba4-ac48-8114bc74a566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:34:14 compute-0 nova_compute[189268]: 2025-11-22 08:34:14.139 189273 DEBUG oslo_concurrency.lockutils [req-2f46c572-34a4-47e3-a78c-e45e09fa155a req-51be2ec6-6d4e-4130-b845-076fc68b56f8 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:34:14 compute-0 nova_compute[189268]: 2025-11-22 08:34:14.139 189273 DEBUG oslo_concurrency.lockutils [req-2f46c572-34a4-47e3-a78c-e45e09fa155a req-51be2ec6-6d4e-4130-b845-076fc68b56f8 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:34:14 compute-0 nova_compute[189268]: 2025-11-22 08:34:14.140 189273 DEBUG oslo_concurrency.lockutils [req-2f46c572-34a4-47e3-a78c-e45e09fa155a req-51be2ec6-6d4e-4130-b845-076fc68b56f8 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:34:14 compute-0 nova_compute[189268]: 2025-11-22 08:34:14.140 189273 DEBUG nova.compute.manager [req-2f46c572-34a4-47e3-a78c-e45e09fa155a req-51be2ec6-6d4e-4130-b845-076fc68b56f8 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] No waiting events found dispatching network-vif-plugged-433ff318-0c74-4ba4-ac48-8114bc74a566 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:34:14 compute-0 nova_compute[189268]: 2025-11-22 08:34:14.141 189273 WARNING nova.compute.manager [req-2f46c572-34a4-47e3-a78c-e45e09fa155a req-51be2ec6-6d4e-4130-b845-076fc68b56f8 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Received unexpected event network-vif-plugged-433ff318-0c74-4ba4-ac48-8114bc74a566 for instance with vm_state active and task_state None.
Nov 22 08:34:14 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 22 08:34:14 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 22 08:34:14 compute-0 nova_compute[189268]: 2025-11-22 08:34:14.970 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:16 compute-0 nova_compute[189268]: 2025-11-22 08:34:16.843 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:19 compute-0 podman[243862]: 2025-11-22 08:34:19.163374518 +0000 UTC m=+0.104883751 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 22 08:34:19 compute-0 podman[243863]: 2025-11-22 08:34:19.165480745 +0000 UTC m=+0.107268576 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_id=edpm, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:34:19 compute-0 nova_compute[189268]: 2025-11-22 08:34:19.976 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:21 compute-0 nova_compute[189268]: 2025-11-22 08:34:21.846 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.091 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; the polling cycle can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.091 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.091 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.092 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e5730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
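The registration lines above show the polling manager wrapping each pollster in a stevedore Extension and binding it to one shared ThreadPoolExecutor, with the cache, pollster history, and discovery cache all starting empty. A minimal sketch of that pattern, assuming an entry-point namespace and a stand-in get_samples interface (neither is taken from Ceilometer's actual code):

    from concurrent.futures import ThreadPoolExecutor

    from stevedore import extension

    # Assumed entry-point namespace; Ceilometer's real wiring is not shown here.
    POLLSTER_NAMESPACE = 'ceilometer.poll.compute'

    def register_pollster_execution(ext, executor, cache, history, discovery_cache):
        # ext.obj is the loaded pollster plugin; get_samples and its arguments
        # are stand-ins for whatever interface the plugin actually exposes.
        return executor.submit(ext.obj.get_samples, cache=cache)

    mgr = extension.ExtensionManager(POLLSTER_NAMESPACE, invoke_on_load=True)
    with ThreadPoolExecutor(max_workers=4) as executor:
        futures = [register_pollster_execution(ext, executor, {}, {}, {})
                   for ext in mgr]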
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.099 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '78b5db02-f49a-4c0b-b4f6-8d3b3d689e66', 'name': 'test_0', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.102 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 64e4ab2b-2a08-4c3c-9561-94454cb0b482 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 22 08:34:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:22.103 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/64e4ab2b-2a08-4c3c-9561-94454cb0b482 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}41de7311aa3eb0f3adb679afd5ea377bdc27c99a5c84bf2ba532fbbe80a7016c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.014 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Sat, 22 Nov 2025 08:34:22 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-f2b78214-87f7-4a80-be15-09e0f4dd7f93 x-openstack-request-id: req-f2b78214-87f7-4a80-be15-09e0f4dd7f93 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.014 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "64e4ab2b-2a08-4c3c-9561-94454cb0b482", "name": "vn-qv6tptr-cfkm2etzuijf-gntxycdg4jfb-vnf-tuynx42zciyf", "status": "ACTIVE", "tenant_id": "80e46844b3824928a6138235e5ede512", "user_id": "27ed1dd009ad4e29863ab5e3a9826c94", "metadata": {"metering.server_group": "209b9e59-811e-4c2b-a756-c29ba92c4b5c"}, "hostId": "984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4", "image": {"id": "de9f57cf-28b4-4cbd-b943-19aa098356bf", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/de9f57cf-28b4-4cbd-b943-19aa098356bf"}]}, "flavor": {"id": "796e25a8-f28d-499e-b2fb-dfae32f0eed7", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/796e25a8-f28d-499e-b2fb-dfae32f0eed7"}]}, "created": "2025-11-22T08:34:02Z", "updated": "2025-11-22T08:34:12Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.63", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4d:1a:4a"}, {"version": 4, "addr": "192.168.122.201", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4d:1a:4a"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/64e4ab2b-2a08-4c3c-9561-94454cb0b482"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/64e4ab2b-2a08-4c3c-9561-94454cb0b482"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-22T08:34:12.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000005", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.014 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/64e4ab2b-2a08-4c3c-9561-94454cb0b482 used request id req-f2b78214-87f7-4a80-be15-09e0f4dd7f93 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.015 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '64e4ab2b-2a08-4c3c-9561-94454cb0b482', 'name': 'vn-qv6tptr-cfkm2etzuijf-gntxycdg4jfb-vnf-tuynx42zciyf', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {'metering.server_group': '209b9e59-811e-4c2b-a756-c29ba92c4b5c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.019 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435', 'name': 'vn-qv6tptr-hea4zpteaolv-dnc7x4xkssdg-vnf-savd4bbetntp', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {'metering.server_group': '209b9e59-811e-4c2b-a756-c29ba92c4b5c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.024 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a8349cde-3de3-4359-9fba-8d329cab9476', 'name': 'vn-qv6tptr-whvy4btuikeu-vmbwmtq4hym4-vnf-rixlnkr2j72q', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {'metering.server_group': '209b9e59-811e-4c2b-a756-c29ba92c4b5c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
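The discovery pass above walks the local libvirt domains and, for an instance whose metadata is not yet cached (64e4ab2b), re-queries the Nova API; the REQ/RESP pair shows a plain GET on /v2.1/servers/{id} with a hashed token header. A minimal sketch of the same call through a keystoneauth1 session, as novaclient does underneath; all auth values below are placeholders:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # All auth values are placeholders for whatever credentials the agent uses.
    auth = v3.Password(auth_url='https://keystone-internal.openstack.svc:5000/v3',
                       username='ceilometer', password='secret',
                       project_name='service',
                       user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)

    resp = sess.get('https://nova-internal.openstack.svc:8774/v2.1/servers/'
                    '64e4ab2b-2a08-4c3c-9561-94454cb0b482',
                    headers={'X-OpenStack-Nova-API-Version': '2.1'})
    server = resp.json()['server']
    print(server['metadata'])  # {'metering.server_group': '209b9e59-811e-4c2b-a756-c29ba92c4b5c'}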
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.024 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.025 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.025 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
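Every polling cycle starts with this coordination check: no source here names a coordination group, so the hashring list is [None] and this agent polls all local instances itself. When groups are configured, work is partitioned across agents with a consistent hash ring (tooz provides one such implementation); a minimal sketch with made-up agent names:

    from tooz import hashring

    # Agent names are made up; with no coordination group configured (the
    # [None] above), no ring exists and every agent keeps all local instances.
    ring = hashring.HashRing(['agent-a', 'agent-b', 'agent-c'])

    resource = b'64e4ab2b-2a08-4c3c-9561-94454cb0b482'
    print(ring.get_nodes(resource))  # set of agents that would own this resource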
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.025 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.026 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-22T08:34:23.025144) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.030 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.034 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 64e4ab2b-2a08-4c3c-9561-94454cb0b482 / tap433ff318-0c inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.035 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.043 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.048 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.bytes volume: 8532 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.049 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
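The four samples above are cumulative per-vNIC byte counts read from libvirt; the "No delta meter predecessor" line for tap433ff318-0c only means this is the first observation of that interface, so no delta can be formed yet. A minimal sketch of reading those counters (connection URI and device name are illustrative, error handling omitted):

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')  # illustrative URI
    dom = conn.lookupByUUIDString('64e4ab2b-2a08-4c3c-9561-94454cb0b482')

    # Returns (rx_bytes, rx_packets, rx_errs, rx_drop,
    #          tx_bytes, tx_packets, tx_errs, tx_drop), all cumulative.
    stats = dom.interfaceStats('tap433ff318-0c')
    print('network.incoming.bytes volume:', stats[0])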
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.049 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.049 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.049 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.050 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.050 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.050 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.050 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-22T08:34:23.050080) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.050 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.051 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.051 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.packets volume: 64 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.052 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.052 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.052 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.052 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.052 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.052 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.053 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.053 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-22T08:34:23.052902) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.053 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.054 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.054 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.055 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.055 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.055 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.055 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.055 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.056 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.056 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.056 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-22T08:34:23.056056) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.057 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.057 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.058 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.058 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.059 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.059 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.059 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.059 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.059 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-22T08:34:23.059785) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.105 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/cpu volume: 42300000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.136 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/cpu volume: 10700000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.174 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/cpu volume: 33970000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 podman[243904]: 2025-11-22 08:34:23.203681652 +0000 UTC m=+0.148717583 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4, config_id=edpm, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., release=1214.1726694543, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public)
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.219 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/cpu volume: 298020000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.220 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
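The cpu meter is cumulative guest CPU time in nanoseconds, so the 42300000000 reported for instance 78b5db02 is about 42.3 s of CPU time. libvirt reports it in the domain info tuple; a minimal sketch (same illustrative URI as before):

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('78b5db02-f49a-4c0b-b4f6-8d3b3d689e66')

    # info() -> [state, maxMem_KiB, memory_KiB, nrVirtCpu, cpuTime_ns]
    state, max_mem, mem, vcpus, cpu_time_ns = dom.info()
    print('cpu volume (ns):', cpu_time_ns)      # e.g. 42300000000
    print('cpu time (s):', cpu_time_ns / 1e9)   # ~42.3 s across 1 vCPU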
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.220 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.221 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.221 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.221 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.221 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.221 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.222 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.222 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.222 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-22T08:34:23.221305) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.223 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.223 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.224 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.224 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.224 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.224 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.224 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.225 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-22T08:34:23.224767) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 podman[243905]: 2025-11-22 08:34:23.231634151 +0000 UTC m=+0.178591274 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.255 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.256 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.256 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.294 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.295 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.295 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.319 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.319 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.320 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.347 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.348 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.348 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.349 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
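disk.device.capacity yields three samples per instance because each of these guests exposes three block devices: a 1 GiB root disk and a 1 GiB ephemeral disk (both 1073741824 bytes, matching the m1.small flavor's disk=1, ephemeral=1) plus a small config drive (485376 or 583680 bytes; the servers report config_drive=True). libvirt's blockInfo returns capacity, allocation, and physical size per device; a minimal sketch with assumed device names:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('78b5db02-f49a-4c0b-b4f6-8d3b3d689e66')

    for dev in ('vda', 'vdb', 'hda'):  # assumed root, ephemeral, config-drive names
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, 'capacity:', capacity, 'allocation:', allocation)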
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.349 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.349 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.349 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.349 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.349 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.350 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-22T08:34:23.349873) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.428 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.429 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.429 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.502 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.503 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.503 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.591 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.592 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.592 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.673 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.674 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.674 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.675 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
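disk.device.read.bytes likewise comes from cumulative per-device I/O counters; a minimal sketch using libvirt's blockStats (device name again assumed):

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('78b5db02-f49a-4c0b-b4f6-8d3b3d689e66')

    # Returns (rd_req, rd_bytes, wr_req, wr_bytes, errs), all cumulative.
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats('vda')
    print('disk.device.read.bytes volume:', rd_bytes)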
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.675 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.675 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.675 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.676 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.676 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.676 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.676 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.676 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.676 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-22T08:34:23.676108) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.677 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.677 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.677 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.677 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.allocation volume: 22093824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.678 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.678 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.678 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.678 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.679 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.679 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.679 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.679 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.679 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.680 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.680 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.680 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.680 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.680 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.680 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.bytes volume: 7436 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.681 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.681 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.681 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.681 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.681 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.681 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.682 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.682 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-22T08:34:23.680056) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.682 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-qv6tptr-cfkm2etzuijf-gntxycdg4jfb-vnf-tuynx42zciyf>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-qv6tptr-cfkm2etzuijf-gntxycdg4jfb-vnf-tuynx42zciyf>]
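The ERROR above is the designed outcome of the DEBUG line before it: the libvirt inspector has no data for IncomingBytesRatePollster, so the pollster raises PollsterPermanentError and the manager stops polling those resources for this pollster and source rather than failing on every cycle. A rough sketch of that blacklist pattern, with simplified names; ceilometer's actual manager logic differs in detail:

    # Rough sketch of the permanent-blacklist behaviour implied by the
    # ERROR above; class shape and manager logic are simplified
    # assumptions, not ceilometer's actual internals.
    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            self.fail_res_list = resources

    blacklist = set()                      # resources never polled again

    def run_pollster(name, resources, get_samples):
        pollable = [r for r in resources if r not in blacklist]
        try:
            return list(get_samples(pollable))
        except PollsterPermanentError as err:
            blacklist.update(err.fail_res_list)
            print(f"Preventing pollster {name} from polling "
                  f"{err.fail_res_list} from now on")
            return []

    def rate_samples(resources):
        # Stand-in for a rate pollster whose inspector has no data.
        if not resources:
            return []
        raise PollsterPermanentError(resources)

    run_pollster("network.incoming.bytes.rate", ["instance-a"], rate_samples)
    run_pollster("network.incoming.bytes.rate", ["instance-a"], rate_samples)  # skips the blacklisted resource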
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.682 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.682 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-22T08:34:23.681880) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.682 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.682 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.682 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.682 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.683 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 1339396359 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.683 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 138141875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.683 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-22T08:34:23.682899) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.683 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 117550863 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.683 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.latency volume: 745725640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.683 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.684 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.latency volume: 3138595 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.684 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.latency volume: 1137897097 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.684 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.latency volume: 138924505 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.684 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.latency volume: 148372768 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.684 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.latency volume: 875417919 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.685 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.latency volume: 107543456 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.685 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.latency volume: 90621118 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.685 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
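The read-latency values above are cumulative counters (total time spent on reads, which libvirt reports in nanoseconds), not instantaneous latencies; large numbers such as 1339396359 only become meaningful as deltas between polls. A small worked example of turning two polls into an average per-request latency, where the second-poll values are invented for illustration:

    # Deriving average read latency from two successive polls of the
    # cumulative counters; the *_1 values are invented for illustration.
    lat_ns_0, lat_ns_1 = 1_339_396_359, 1_340_996_359  # disk.device.read.latency (ns)
    req_0,    req_1    = 840, 856                      # disk.device.read.requests
    avg_ns = (lat_ns_1 - lat_ns_0) / (req_1 - req_0)
    print(f"{avg_ns / 1e6:.2f} ms per read")           # 0.10 ms per read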
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.685 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.685 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.685 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.686 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.686 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.686 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.686 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-22T08:34:23.686196) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.686 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.686 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.incoming.bytes.delta volume: 1564 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.687 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.687 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
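network.incoming.bytes.delta differs from the cumulative network.incoming.bytes: the delta meter reports only the traffic seen since the previous poll, which is why its values (84, 0, 1564, 84) are so much smaller than the cumulative counters elsewhere in this capture. The derivation is a per-resource subtraction, sketched here with a hypothetical helper and invented cumulative values:

    # Hypothetical helper showing how a .delta meter can be derived from
    # a cumulative counter between polls.
    prev = {}

    def to_delta(resource_id, cumulative):
        delta = cumulative - prev.get(resource_id, cumulative)  # first poll -> 0
        prev[resource_id] = cumulative
        return delta

    rid = "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66"
    to_delta(rid, 1000)          # first poll seeds the cache, returns 0
    print(to_delta(rid, 1084))   # 84, matching the delta sample above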
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.687 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.687 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.687 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.687 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.687 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.688 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.688 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.688 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.688 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.688 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.689 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.689 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.689 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.689 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.689 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.690 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.690 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.690 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
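Every pollster run above includes the same coordination check: with coordination group name [None], this agent is not part of a partitioned polling source and handles all local instances itself. When several agents do share a source, ceilometer partitions resources across them with consistent hash rings (the "hashrings" in the DEBUG lines, built on the tooz library). A toy illustration of hash-based partitioning follows; it is deliberately simpler than a real ring, which uses virtual nodes to minimize remapping when agents join or leave:

    import hashlib

    # Toy resource partitioning: each agent polls only the resources that
    # hash to it. Real hash rings add virtual nodes so that adding or
    # removing an agent remaps as few resources as possible.
    def owner(resource_id: str, agents: list[str]) -> str:
        digest = hashlib.md5(resource_id.encode()).hexdigest()
        return sorted(agents)[int(digest, 16) % len(agents)]

    agents = ["compute-0", "compute-1"]
    for rid in ("78b5db02-f49a-4c0b-b4f6-8d3b3d689e66",
                "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435"):
        print(rid, "->", owner(rid, agents))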
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.691 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.691 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.691 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-22T08:34:23.687955) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.691 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.691 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.691 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.691 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.691 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.691 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-22T08:34:23.691316) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.692 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.692 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.692 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.692 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.692 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.693 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.693 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.693 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.693 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.694 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.694 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.694 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.694 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.695 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.695 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.695 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.695 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.695 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.695 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-22T08:34:23.695156) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.695 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.696 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.696 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.696 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.696 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.697 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.697 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.697 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.697 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.697 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.698 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.698 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.698 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.698 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.698 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.699 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.699 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.699 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.699 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-22T08:34:23.698977) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.699 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.699 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.700 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.700 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.700 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.700 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.701 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.701 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.701 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.701 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.702 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.702 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.702 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.702 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.702 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.702 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.702 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.703 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.703 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.703 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.703 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
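The power.state volume of 1 for all four instances corresponds to "running": both libvirt's VIR_DOMAIN_RUNNING and Nova's power_state.RUNNING are the integer 1. A small lookup table for reading these samples, following Nova's power-state numbering:

    # Nova power_state numbering (nova.compute.power_state); useful when
    # reading the power.state samples above.
    POWER_STATE = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
    print(POWER_STATE[1])  # RUNNING -> all four instances are up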
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.704 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.704 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.704 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.704 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.704 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.704 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 18733649639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.704 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-22T08:34:23.702569) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-22T08:34:23.704539) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.705 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 19241219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.705 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.705 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.705 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.705 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.706 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.latency volume: 18953900848 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.706 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.latency volume: 13141545 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.706 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.706 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.latency volume: 3217141652 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.706 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.latency volume: 13984579 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.707 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.707 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.707 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.707 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.708 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.708 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.708 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.708 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-22T08:34:23.708141) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.709 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
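Two threads interleave throughout this capture: thread 15 runs the pollsters and records a "Pollster heartbeat update" as each one starts, while thread 12 asynchronously persists the timestamps ("Updated heartbeat for ..."), which is why the status lines trail the polling lines by a few entries. A minimal producer/consumer sketch of that pattern, with invented names rather than ceilometer's internals:

    # Minimal producer/consumer heartbeat sketch (invented names): the
    # polling thread records beats, a status thread logs them, mirroring
    # the interleaved thread-15/thread-12 lines above.
    import datetime
    import queue
    import threading

    beats: "queue.Queue[tuple[str, datetime.datetime]]" = queue.Queue()

    def heartbeat(pollster: str) -> None:   # called on the polling thread
        beats.put((pollster, datetime.datetime.now(datetime.timezone.utc)))

    def status_updater() -> None:           # runs on a separate status thread
        while True:
            name, ts = beats.get()
            print(f"Updated heartbeat for {name} ({ts.isoformat()})")
            beats.task_done()

    threading.Thread(target=status_updater, daemon=True).start()
    heartbeat("disk.ephemeral.size")
    beats.join()                            # wait until the beat is logged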
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.709 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.709 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.709 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.709 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.709 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.709 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.709 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.710 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.710 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.packets volume: 58 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.710 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.710 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.710 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.711 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.711 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.711 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-22T08:34:23.709593) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.711 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.712 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.712 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.712 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-22T08:34:23.711304) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.712 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.712 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.712 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.712 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.712 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.713 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-22T08:34:23.712716) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.713 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.713 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.714 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.718 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
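Before each pollster runs, the manager checks whether its polling source names a coordination group; with a group name of [None], as above, the agent keeps every locally discovered instance for itself. A configured group would instead partition resources across agents (ceilometer uses tooz hashrings for this). An illustrative sketch under those assumptions, with hypothetical helper names:

    # Simplified ownership check: no group means poll everything locally;
    # a group means keep only the resources this agent "owns".
    import hashlib

    def owned_resources(resources, group, agent_id, all_agents):
        if group is None:            # no coordination configured
            return list(resources)
        agents = sorted(all_agents)  # deterministic across all agents
        def owner(resource_id):
            digest = hashlib.sha256(resource_id.encode()).hexdigest()
            return agents[int(digest, 16) % len(agents)]
        return [r for r in resources if owner(r) == agent_id]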
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.718 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.718 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.719 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.719 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.719 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-22T08:34:23.719270) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.719 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.720 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.720 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.outgoing.bytes.delta volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.720 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.720 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
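The *.delta volumes above (70, 0, 2216 and 0 bytes) are differences between consecutive reads of a cumulative interface counter, one per instance. A sketch of that derivation, assuming a per-instance cache of the previous reading (names are illustrative):

    # previous cumulative reading per instance; first poll yields 0
    _previous = {}

    def bytes_delta(instance_id, current_total):
        """Bytes transmitted since the last poll for this instance."""
        last = _previous.get(instance_id)
        _previous[instance_id] = current_total
        if last is None or current_total < last:  # first poll or counter reset
            return 0
        return current_total - last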
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.721 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.721 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.721 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.721 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.721 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.721 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-22T08:34:23.721381) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.721 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.721 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-qv6tptr-cfkm2etzuijf-gntxycdg4jfb-vnf-tuynx42zciyf>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-qv6tptr-cfkm2etzuijf-gntxycdg4jfb-vnf-tuynx42zciyf>]
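LibvirtInspector has no data source for the bytes-rate meter, so the pollster raises PollsterPermanentError and the manager excludes those instances from that pollster for good rather than failing on every cycle. A sketch of that blacklist pattern (all names illustrative, not ceilometer's actual classes):

    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            self.resources = resources

    _blacklist = {}  # pollster name -> resource ids to skip permanently

    def poll(pollster_name, get_samples, resources):
        skip = _blacklist.setdefault(pollster_name, set())
        todo = [r for r in resources if r not in skip]
        try:
            return get_samples(todo)
        except PollsterPermanentError as exc:
            skip.update(exc.resources)  # never offer these resources again
            return []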
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.722 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.722 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.722 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.722 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.722 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.722 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/memory.usage volume: 48.90625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.722 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.722 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 64e4ab2b-2a08-4c3c-9561-94454cb0b482: ceilometer.compute.pollsters.NoVolumeException
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.723 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/memory.usage volume: 49.12890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.723 15 DEBUG ceilometer.compute.pollsters [-] a8349cde-3de3-4359-9fba-8d329cab9476/memory.usage volume: 49.0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.723 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
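One instance reports its memory.usage volume as Unavailable, so the pollster raises NoVolumeException, logs the warning above and drops that sample while still emitting the other three (values are MiB of guest memory in use). A sketch of that skip-on-missing-value handling, with illustrative names:

    class NoVolumeException(Exception):
        pass

    def stats_to_samples(stats_by_instance):
        samples = []
        for instance_id, volume in stats_by_instance.items():
            try:
                if volume in (None, "Unavailable"):  # hypervisor gave no reading
                    raise NoVolumeException(instance_id)
                samples.append((instance_id, float(volume)))
            except NoVolumeException:
                print(f"memory.usage is not available for instance {instance_id}")
        return samples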
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.724 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.724 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.724 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.724 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.724 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.725 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.725 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-22T08:34:23.722495) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.725 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.725 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.725 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.725 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.725 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.725 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.725 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.726 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.726 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.726 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.726 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.726 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.726 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.726 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.726 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.726 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.726 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.726 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:34:23.726 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:34:24 compute-0 nova_compute[189268]: 2025-11-22 08:34:24.979 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:25 compute-0 nova_compute[189268]: 2025-11-22 08:34:25.921 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:34:25 compute-0 nova_compute[189268]: 2025-11-22 08:34:25.922 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:34:26 compute-0 nova_compute[189268]: 2025-11-22 08:34:26.452 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:34:26 compute-0 nova_compute[189268]: 2025-11-22 08:34:26.453 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:34:26 compute-0 nova_compute[189268]: 2025-11-22 08:34:26.454 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:34:26 compute-0 nova_compute[189268]: 2025-11-22 08:34:26.847 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:28 compute-0 nova_compute[189268]: 2025-11-22 08:34:28.048 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Updating instance_info_cache with network_info: [{"id": "3a644b09-361d-48d6-8efe-a180b1177788", "address": "fa:16:3e:7d:9f:dc", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a644b09-36", "ovs_interfaceid": "3a644b09-361d-48d6-8efe-a180b1177788", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:34:28 compute-0 nova_compute[189268]: 2025-11-22 08:34:28.071 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:34:28 compute-0 nova_compute[189268]: 2025-11-22 08:34:28.072 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
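The info_cache payload logged above is plain JSON, so the fixed and floating addresses are easy to pull out. A sketch, assuming payload holds the JSON text of the logged network_info list:

    import json

    network_info = json.loads(payload)
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print("fixed:", ip["address"])
                for fip in ip.get("floating_ips", []):
                    print("floating:", fip["address"])
    # for the instance above: fixed 192.168.0.192, floating 192.168.122.207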
Nov 22 08:34:28 compute-0 nova_compute[189268]: 2025-11-22 08:34:28.074 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:34:29 compute-0 nova_compute[189268]: 2025-11-22 08:34:29.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:34:29 compute-0 nova_compute[189268]: 2025-11-22 08:34:29.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:34:29 compute-0 nova_compute[189268]: 2025-11-22 08:34:29.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
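_reclaim_queued_deletes returns early because reclaim_instance_interval is not positive, meaning soft-deleted instances are not held for deferred reclaim on this node. The guard amounts to the following (CONF below is a stand-in for nova's oslo.config object):

    class CONF:
        reclaim_instance_interval = 0  # <= 0 disables deferred reclaim

    def reclaim_queued_deletes():
        if CONF.reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # ...otherwise purge SOFT_DELETED instances older than the interval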
Nov 22 08:34:29 compute-0 podman[243947]: 2025-11-22 08:34:29.203820599 +0000 UTC m=+0.154837130 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc.)
Nov 22 08:34:29 compute-0 podman[203476]: time="2025-11-22T08:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:34:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:34:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4802 "" "Go-http-client/1.1"
Nov 22 08:34:29 compute-0 nova_compute[189268]: 2025-11-22 08:34:29.983 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:30 compute-0 nova_compute[189268]: 2025-11-22 08:34:30.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:34:31 compute-0 nova_compute[189268]: 2025-11-22 08:34:31.095 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:34:31 compute-0 podman[243968]: 2025-11-22 08:34:31.20224996 +0000 UTC m=+0.137922620 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:34:31 compute-0 openstack_network_exporter[205661]: ERROR   08:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:34:31 compute-0 openstack_network_exporter[205661]: ERROR   08:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:34:31 compute-0 openstack_network_exporter[205661]: ERROR   08:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:34:31 compute-0 openstack_network_exporter[205661]: ERROR   08:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:34:31 compute-0 openstack_network_exporter[205661]: ERROR   08:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:34:31 compute-0 nova_compute[189268]: 2025-11-22 08:34:31.850 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:34 compute-0 nova_compute[189268]: 2025-11-22 08:34:34.987 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:36 compute-0 nova_compute[189268]: 2025-11-22 08:34:36.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:34:36 compute-0 nova_compute[189268]: 2025-11-22 08:34:36.853 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.127 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.132 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.133 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
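The acquired/waited and released/held pairs above are oslo.concurrency's built-in lock instrumentation. Application code gets the same DEBUG logging by taking a named lock through lockutils, e.g.:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # runs with the named in-process lock held; acquisition wait and
        # hold times are logged at DEBUG as in the lines above
        pass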
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.134 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.276 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.359 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.360 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.417 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.423 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.509 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.511 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.578 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.588 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.655 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.658 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.735 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.748 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.831 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.833 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.898 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.906 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.980 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:37 compute-0 nova_compute[189268]: 2025-11-22 08:34:37.984 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:38 compute-0 nova_compute[189268]: 2025-11-22 08:34:38.062 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:38 compute-0 nova_compute[189268]: 2025-11-22 08:34:38.064 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:38 compute-0 nova_compute[189268]: 2025-11-22 08:34:38.137 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:38 compute-0 nova_compute[189268]: 2025-11-22 08:34:38.139 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:38 compute-0 nova_compute[189268]: 2025-11-22 08:34:38.230 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:38 compute-0 nova_compute[189268]: 2025-11-22 08:34:38.240 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:38 compute-0 nova_compute[189268]: 2025-11-22 08:34:38.323 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:38 compute-0 nova_compute[189268]: 2025-11-22 08:34:38.337 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:38 compute-0 nova_compute[189268]: 2025-11-22 08:34:38.414 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:38 compute-0 nova_compute[189268]: 2025-11-22 08:34:38.416 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:38 compute-0 nova_compute[189268]: 2025-11-22 08:34:38.513 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:34:38 compute-0 nova_compute[189268]: 2025-11-22 08:34:38.514 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:34:38 compute-0 nova_compute[189268]: 2025-11-22 08:34:38.592 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
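Each qemu-img probe is wrapped by oslo_concurrency.prlimit with a 1 GiB address-space cap (--as) and a 30-second CPU cap (--cpu), so a pathological image cannot stall or exhaust the compute service. The equivalent call from Python, with the instance path shown as a placeholder:

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", "/var/lib/nova/instances/<uuid>/disk",
        "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(
            address_space=1073741824,  # 1 GiB, the --as value above
            cpu_time=30))              # seconds, the --cpu value above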
Nov 22 08:34:39 compute-0 nova_compute[189268]: 2025-11-22 08:34:39.018 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:34:39 compute-0 nova_compute[189268]: 2025-11-22 08:34:39.019 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4755MB free_disk=72.45946502685547GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:34:39 compute-0 nova_compute[189268]: 2025-11-22 08:34:39.020 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:34:39 compute-0 nova_compute[189268]: 2025-11-22 08:34:39.021 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:34:39 compute-0 nova_compute[189268]: 2025-11-22 08:34:39.256 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:34:39 compute-0 nova_compute[189268]: 2025-11-22 08:34:39.257 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance a8349cde-3de3-4359-9fba-8d329cab9476 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:34:39 compute-0 nova_compute[189268]: 2025-11-22 08:34:39.258 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:34:39 compute-0 nova_compute[189268]: 2025-11-22 08:34:39.258 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 64e4ab2b-2a08-4c3c-9561-94454cb0b482 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:34:39 compute-0 nova_compute[189268]: 2025-11-22 08:34:39.259 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:34:39 compute-0 nova_compute[189268]: 2025-11-22 08:34:39.259 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:34:39 compute-0 nova_compute[189268]: 2025-11-22 08:34:39.339 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:34:39 compute-0 nova_compute[189268]: 2025-11-22 08:34:39.363 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
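Placement computes effective capacity per resource class as (total - reserved) * allocation_ratio, which is why 4 allocated vCPUs out of 8 physical leave ample headroom under the 4.0 ratio. Reproducing the arithmetic from the inventory dict above:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2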
Nov 22 08:34:39 compute-0 nova_compute[189268]: 2025-11-22 08:34:39.503 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:34:39 compute-0 nova_compute[189268]: 2025-11-22 08:34:39.504 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:34:39 compute-0 nova_compute[189268]: 2025-11-22 08:34:39.991 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:40 compute-0 nova_compute[189268]: 2025-11-22 08:34:40.507 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:34:41 compute-0 ovn_controller[97783]: 2025-11-22T08:34:41Z|00056|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Nov 22 08:34:41 compute-0 nova_compute[189268]: 2025-11-22 08:34:41.855 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:42 compute-0 podman[244042]: 2025-11-22 08:34:42.146433141 +0000 UTC m=+0.088583119 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 08:34:42 compute-0 podman[244043]: 2025-11-22 08:34:42.162042896 +0000 UTC m=+0.100959606 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 22 08:34:42 compute-0 podman[244044]: 2025-11-22 08:34:42.175584173 +0000 UTC m=+0.108394946 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 08:34:44 compute-0 nova_compute[189268]: 2025-11-22 08:34:44.995 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:46 compute-0 nova_compute[189268]: 2025-11-22 08:34:46.857 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:49 compute-0 nova_compute[189268]: 2025-11-22 08:34:49.999 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:50 compute-0 podman[244119]: 2025-11-22 08:34:50.158595988 +0000 UTC m=+0.088669521 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 08:34:50 compute-0 podman[244118]: 2025-11-22 08:34:50.17371592 +0000 UTC m=+0.103241167 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 22 08:34:51 compute-0 ovn_controller[97783]: 2025-11-22T08:34:51Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4d:1a:4a 192.168.0.63
Nov 22 08:34:51 compute-0 ovn_controller[97783]: 2025-11-22T08:34:51Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:1a:4a 192.168.0.63
Nov 22 08:34:51 compute-0 nova_compute[189268]: 2025-11-22 08:34:51.861 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:54 compute-0 podman[244155]: 2025-11-22 08:34:54.141040929 +0000 UTC m=+0.094930551 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, release=1214.1726694543, version=9.4, vcs-type=git, architecture=x86_64, managed_by=edpm_ansible, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.openshift.tags=base rhel9, container_name=kepler, distribution-scope=public, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container)
Nov 22 08:34:54 compute-0 podman[244156]: 2025-11-22 08:34:54.1756445 +0000 UTC m=+0.127394443 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:34:55 compute-0 nova_compute[189268]: 2025-11-22 08:34:55.003 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:56 compute-0 nova_compute[189268]: 2025-11-22 08:34:56.861 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:34:59 compute-0 podman[203476]: time="2025-11-22T08:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:34:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:34:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4802 "" "Go-http-client/1.1"
Nov 22 08:35:00 compute-0 nova_compute[189268]: 2025-11-22 08:35:00.007 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:00 compute-0 podman[244200]: 2025-11-22 08:35:00.13653664 +0000 UTC m=+0.087755506 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.openshift.tags=minimal rhel9, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public)
Nov 22 08:35:01 compute-0 openstack_network_exporter[205661]: ERROR   08:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:35:01 compute-0 openstack_network_exporter[205661]: ERROR   08:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:35:01 compute-0 openstack_network_exporter[205661]: ERROR   08:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:35:01 compute-0 openstack_network_exporter[205661]: ERROR   08:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:35:01 compute-0 openstack_network_exporter[205661]: ERROR   08:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:35:01 compute-0 nova_compute[189268]: 2025-11-22 08:35:01.864 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:02 compute-0 podman[244220]: 2025-11-22 08:35:02.124079246 +0000 UTC m=+0.068765520 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:35:05 compute-0 nova_compute[189268]: 2025-11-22 08:35:05.013 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:06 compute-0 nova_compute[189268]: 2025-11-22 08:35:06.868 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:35:09.971 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:35:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:35:09.972 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:35:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:35:09.972 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:35:10 compute-0 nova_compute[189268]: 2025-11-22 08:35:10.017 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:11 compute-0 nova_compute[189268]: 2025-11-22 08:35:11.871 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:13 compute-0 podman[244245]: 2025-11-22 08:35:13.139783372 +0000 UTC m=+0.073260404 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 08:35:13 compute-0 podman[244246]: 2025-11-22 08:35:13.169369733 +0000 UTC m=+0.101136599 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 08:35:13 compute-0 podman[244244]: 2025-11-22 08:35:13.172993202 +0000 UTC m=+0.108103168 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3)
Nov 22 08:35:15 compute-0 nova_compute[189268]: 2025-11-22 08:35:15.020 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:16 compute-0 nova_compute[189268]: 2025-11-22 08:35:16.873 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:20 compute-0 nova_compute[189268]: 2025-11-22 08:35:20.025 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:21 compute-0 podman[244305]: 2025-11-22 08:35:21.184739481 +0000 UTC m=+0.114096490 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 08:35:21 compute-0 podman[244306]: 2025-11-22 08:35:21.203261663 +0000 UTC m=+0.127561685 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 08:35:21 compute-0 nova_compute[189268]: 2025-11-22 08:35:21.876 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:24 compute-0 nova_compute[189268]: 2025-11-22 08:35:24.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:35:24 compute-0 nova_compute[189268]: 2025-11-22 08:35:24.101 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:35:24 compute-0 nova_compute[189268]: 2025-11-22 08:35:24.102 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:35:24 compute-0 nova_compute[189268]: 2025-11-22 08:35:24.337 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:35:24 compute-0 nova_compute[189268]: 2025-11-22 08:35:24.337 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:35:24 compute-0 nova_compute[189268]: 2025-11-22 08:35:24.338 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:35:24 compute-0 nova_compute[189268]: 2025-11-22 08:35:24.338 189273 DEBUG nova.objects.instance [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:35:25 compute-0 nova_compute[189268]: 2025-11-22 08:35:25.029 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:25 compute-0 podman[244342]: 2025-11-22 08:35:25.181120453 +0000 UTC m=+0.104045028 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, maintainer=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=base rhel9, release-0.7.12=, vcs-type=git)
Nov 22 08:35:25 compute-0 podman[244343]: 2025-11-22 08:35:25.219700158 +0000 UTC m=+0.136081965 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 08:35:25 compute-0 nova_compute[189268]: 2025-11-22 08:35:25.395 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updating instance_info_cache with network_info: [{"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "address": "fa:16:3e:4f:4a:5d", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.53", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4645bc8c-a8", "ovs_interfaceid": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:35:25 compute-0 nova_compute[189268]: 2025-11-22 08:35:25.418 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:35:25 compute-0 nova_compute[189268]: 2025-11-22 08:35:25.419 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 08:35:26 compute-0 nova_compute[189268]: 2025-11-22 08:35:26.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:35:26 compute-0 nova_compute[189268]: 2025-11-22 08:35:26.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 08:35:26 compute-0 nova_compute[189268]: 2025-11-22 08:35:26.110 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 08:35:26 compute-0 nova_compute[189268]: 2025-11-22 08:35:26.879 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:27 compute-0 nova_compute[189268]: 2025-11-22 08:35:27.111 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:35:29 compute-0 podman[203476]: time="2025-11-22T08:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:35:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:35:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
Nov 22 08:35:30 compute-0 nova_compute[189268]: 2025-11-22 08:35:30.033 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:31 compute-0 nova_compute[189268]: 2025-11-22 08:35:31.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:35:31 compute-0 nova_compute[189268]: 2025-11-22 08:35:31.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:35:31 compute-0 nova_compute[189268]: 2025-11-22 08:35:31.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:35:31 compute-0 nova_compute[189268]: 2025-11-22 08:35:31.101 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:35:31 compute-0 nova_compute[189268]: 2025-11-22 08:35:31.101 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:35:31 compute-0 nova_compute[189268]: 2025-11-22 08:35:31.101 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 08:35:31 compute-0 podman[244387]: 2025-11-22 08:35:31.193790766 +0000 UTC m=+0.132970462 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, architecture=x86_64, release=1755695350, config_id=edpm, vendor=Red Hat, Inc., container_name=openstack_network_exporter)
Nov 22 08:35:31 compute-0 openstack_network_exporter[205661]: ERROR   08:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:35:31 compute-0 openstack_network_exporter[205661]: ERROR   08:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:35:31 compute-0 openstack_network_exporter[205661]: ERROR   08:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:35:31 compute-0 openstack_network_exporter[205661]: ERROR   08:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:35:31 compute-0 openstack_network_exporter[205661]: ERROR   08:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:35:31 compute-0 nova_compute[189268]: 2025-11-22 08:35:31.881 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:33 compute-0 nova_compute[189268]: 2025-11-22 08:35:33.109 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:35:33 compute-0 podman[244407]: 2025-11-22 08:35:33.151950152 +0000 UTC m=+0.095689402 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:35:35 compute-0 nova_compute[189268]: 2025-11-22 08:35:35.037 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:35 compute-0 nova_compute[189268]: 2025-11-22 08:35:35.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:35:36 compute-0 nova_compute[189268]: 2025-11-22 08:35:36.884 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.127 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.128 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.128 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.128 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.244 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.312 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.314 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.395 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.397 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.478 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.481 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.557 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.569 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.645 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.647 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.716 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.717 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.786 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.787 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.875 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.889 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.956 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:35:37 compute-0 nova_compute[189268]: 2025-11-22 08:35:37.958 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:35:38 compute-0 nova_compute[189268]: 2025-11-22 08:35:38.036 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:35:38 compute-0 nova_compute[189268]: 2025-11-22 08:35:38.038 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:35:38 compute-0 nova_compute[189268]: 2025-11-22 08:35:38.103 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:35:38 compute-0 nova_compute[189268]: 2025-11-22 08:35:38.104 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:35:38 compute-0 nova_compute[189268]: 2025-11-22 08:35:38.173 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:35:38 compute-0 nova_compute[189268]: 2025-11-22 08:35:38.186 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:35:38 compute-0 nova_compute[189268]: 2025-11-22 08:35:38.255 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:35:38 compute-0 nova_compute[189268]: 2025-11-22 08:35:38.257 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:35:38 compute-0 nova_compute[189268]: 2025-11-22 08:35:38.323 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:35:38 compute-0 nova_compute[189268]: 2025-11-22 08:35:38.325 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:35:38 compute-0 nova_compute[189268]: 2025-11-22 08:35:38.415 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:35:38 compute-0 nova_compute[189268]: 2025-11-22 08:35:38.417 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:35:38 compute-0 nova_compute[189268]: 2025-11-22 08:35:38.522 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476/disk.eph0 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:35:39 compute-0 nova_compute[189268]: 2025-11-22 08:35:39.005 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:35:39 compute-0 nova_compute[189268]: 2025-11-22 08:35:39.008 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4658MB free_disk=72.43790817260742GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:35:39 compute-0 nova_compute[189268]: 2025-11-22 08:35:39.008 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:35:39 compute-0 nova_compute[189268]: 2025-11-22 08:35:39.009 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:35:39 compute-0 nova_compute[189268]: 2025-11-22 08:35:39.222 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:35:39 compute-0 nova_compute[189268]: 2025-11-22 08:35:39.223 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance a8349cde-3de3-4359-9fba-8d329cab9476 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:35:39 compute-0 nova_compute[189268]: 2025-11-22 08:35:39.223 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:35:39 compute-0 nova_compute[189268]: 2025-11-22 08:35:39.223 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 64e4ab2b-2a08-4c3c-9561-94454cb0b482 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:35:39 compute-0 nova_compute[189268]: 2025-11-22 08:35:39.223 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:35:39 compute-0 nova_compute[189268]: 2025-11-22 08:35:39.223 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:35:39 compute-0 nova_compute[189268]: 2025-11-22 08:35:39.300 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing inventories for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 08:35:39 compute-0 nova_compute[189268]: 2025-11-22 08:35:39.367 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating ProviderTree inventory for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 08:35:39 compute-0 nova_compute[189268]: 2025-11-22 08:35:39.367 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating inventory in ProviderTree for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 08:35:39 compute-0 nova_compute[189268]: 2025-11-22 08:35:39.384 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing aggregate associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 08:35:39 compute-0 nova_compute[189268]: 2025-11-22 08:35:39.407 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing trait associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, traits: COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 08:35:39 compute-0 nova_compute[189268]: 2025-11-22 08:35:39.492 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:35:39 compute-0 nova_compute[189268]: 2025-11-22 08:35:39.510 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:35:39 compute-0 nova_compute[189268]: 2025-11-22 08:35:39.511 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:35:39 compute-0 nova_compute[189268]: 2025-11-22 08:35:39.511 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:35:40 compute-0 nova_compute[189268]: 2025-11-22 08:35:40.040 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:41 compute-0 nova_compute[189268]: 2025-11-22 08:35:41.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:35:41 compute-0 nova_compute[189268]: 2025-11-22 08:35:41.101 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:35:41 compute-0 nova_compute[189268]: 2025-11-22 08:35:41.887 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:44 compute-0 podman[244479]: 2025-11-22 08:35:44.144162268 +0000 UTC m=+0.086648467 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 08:35:44 compute-0 podman[244480]: 2025-11-22 08:35:44.149822492 +0000 UTC m=+0.086907645 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:35:44 compute-0 podman[244481]: 2025-11-22 08:35:44.154035566 +0000 UTC m=+0.088888748 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 22 08:35:44 compute-0 nova_compute[189268]: 2025-11-22 08:35:44.367 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:35:44 compute-0 nova_compute[189268]: 2025-11-22 08:35:44.399 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Triggering sync for uuid 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 08:35:44 compute-0 nova_compute[189268]: 2025-11-22 08:35:44.399 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Triggering sync for uuid a8349cde-3de3-4359-9fba-8d329cab9476 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 08:35:44 compute-0 nova_compute[189268]: 2025-11-22 08:35:44.400 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Triggering sync for uuid cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 08:35:44 compute-0 nova_compute[189268]: 2025-11-22 08:35:44.400 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Triggering sync for uuid 64e4ab2b-2a08-4c3c-9561-94454cb0b482 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 08:35:44 compute-0 nova_compute[189268]: 2025-11-22 08:35:44.400 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:35:44 compute-0 nova_compute[189268]: 2025-11-22 08:35:44.401 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:35:44 compute-0 nova_compute[189268]: 2025-11-22 08:35:44.401 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "a8349cde-3de3-4359-9fba-8d329cab9476" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:35:44 compute-0 nova_compute[189268]: 2025-11-22 08:35:44.405 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "a8349cde-3de3-4359-9fba-8d329cab9476" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:35:44 compute-0 nova_compute[189268]: 2025-11-22 08:35:44.405 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:35:44 compute-0 nova_compute[189268]: 2025-11-22 08:35:44.406 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:35:44 compute-0 nova_compute[189268]: 2025-11-22 08:35:44.406 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:35:44 compute-0 nova_compute[189268]: 2025-11-22 08:35:44.406 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:35:44 compute-0 nova_compute[189268]: 2025-11-22 08:35:44.465 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:35:44 compute-0 nova_compute[189268]: 2025-11-22 08:35:44.468 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "a8349cde-3de3-4359-9fba-8d329cab9476" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:35:44 compute-0 nova_compute[189268]: 2025-11-22 08:35:44.470 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:35:44 compute-0 nova_compute[189268]: 2025-11-22 08:35:44.486 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:35:45 compute-0 nova_compute[189268]: 2025-11-22 08:35:45.044 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:46 compute-0 nova_compute[189268]: 2025-11-22 08:35:46.893 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:50 compute-0 nova_compute[189268]: 2025-11-22 08:35:50.049 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.607 189273 DEBUG oslo_concurrency.lockutils [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "a8349cde-3de3-4359-9fba-8d329cab9476" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.607 189273 DEBUG oslo_concurrency.lockutils [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "a8349cde-3de3-4359-9fba-8d329cab9476" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.608 189273 DEBUG oslo_concurrency.lockutils [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "a8349cde-3de3-4359-9fba-8d329cab9476-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.608 189273 DEBUG oslo_concurrency.lockutils [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "a8349cde-3de3-4359-9fba-8d329cab9476-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.609 189273 DEBUG oslo_concurrency.lockutils [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "a8349cde-3de3-4359-9fba-8d329cab9476-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.612 189273 INFO nova.compute.manager [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Terminating instance
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.614 189273 DEBUG nova.compute.manager [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 08:35:51 compute-0 kernel: tapc99bd243-11 (unregistering): left promiscuous mode
Nov 22 08:35:51 compute-0 NetworkManager[56326]: <info>  [1763800551.6660] device (tapc99bd243-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 08:35:51 compute-0 ovn_controller[97783]: 2025-11-22T08:35:51Z|00057|binding|INFO|Releasing lport c99bd243-1114-4104-8d75-dd481789f958 from this chassis (sb_readonly=0)
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.673 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:51 compute-0 ovn_controller[97783]: 2025-11-22T08:35:51Z|00058|binding|INFO|Setting lport c99bd243-1114-4104-8d75-dd481789f958 down in Southbound
Nov 22 08:35:51 compute-0 ovn_controller[97783]: 2025-11-22T08:35:51Z|00059|binding|INFO|Removing iface tapc99bd243-11 ovn-installed in OVS
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.677 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:35:51.685 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:fd:a4 192.168.0.99'], port_security=['fa:16:3e:2a:fd:a4 192.168.0.99'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-eigzbqv6tptr-whvy4btuikeu-vmbwmtq4hym4-port-ql5olvunn5or', 'neutron:cidrs': '192.168.0.99/24', 'neutron:device_id': 'a8349cde-3de3-4359-9fba-8d329cab9476', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-eigzbqv6tptr-whvy4btuikeu-vmbwmtq4hym4-port-ql5olvunn5or', 'neutron:project_id': '80e46844b3824928a6138235e5ede512', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d35d3a2-03b3-4b0d-a4c4-f066616bbaa8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.200', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a46a1c4a-0f65-4313-a2a5-5e5bba4e3fd3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=c99bd243-1114-4104-8d75-dd481789f958) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:35:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:35:51.687 106642 INFO neutron.agent.ovn.metadata.agent [-] Port c99bd243-1114-4104-8d75-dd481789f958 in datapath 02517cc7-8060-4764-b9b0-b1d7f59e3ae8 unbound from our chassis
Nov 22 08:35:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:35:51.688 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02517cc7-8060-4764-b9b0-b1d7f59e3ae8
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.689 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:35:51.710 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2f364a-600c-4926-96ea-48e3e1d95a3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:35:51 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 22 08:35:51 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 6min 2.619s CPU time.
Nov 22 08:35:51 compute-0 systemd-machined[155703]: Machine qemu-2-instance-00000002 terminated.
Nov 22 08:35:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:35:51.751 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[fd31ceea-a00c-416a-abe9-8d28d4df96b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:35:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:35:51.755 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[ba3e0006-afc1-4a0e-ab4b-13f3829f37fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:35:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:35:51.786 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[56581fb6-cb2e-4417-a075-0569133bba52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:35:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:35:51.815 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a1a584-63ab-458c-8fff-b7c2b8565132]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02517cc7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:86:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 15, 'rx_bytes': 532, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 15, 'rx_bytes': 532, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501085, 'reachable_time': 39670, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244583, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:35:51 compute-0 podman[244540]: 2025-11-22 08:35:51.82318682 +0000 UTC m=+0.123159366 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 08:35:51 compute-0 podman[244537]: 2025-11-22 08:35:51.827034665 +0000 UTC m=+0.123708491 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a)
Nov 22 08:35:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:35:51.839 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[52a61547-2ad6-418f-b739-679f06d5d426]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap02517cc7-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501097, 'tstamp': 501097}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244584, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap02517cc7-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501100, 'tstamp': 501100}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244584, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:35:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:35:51.842 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02517cc7-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.844 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.857 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:35:51.859 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02517cc7-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:35:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:35:51.860 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:35:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:35:51.862 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02517cc7-80, col_values=(('external_ids', {'iface-id': '5e2a8859-83a6-4000-bcad-5571f3c7bd5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:35:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:35:51.862 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.894 189273 DEBUG nova.compute.manager [req-a59f31dd-84a9-4f5e-8dc0-658e86c4d4e6 req-cf85b615-5169-4219-96c3-ffa1a8014bdf 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Received event network-vif-unplugged-c99bd243-1114-4104-8d75-dd481789f958 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.895 189273 DEBUG oslo_concurrency.lockutils [req-a59f31dd-84a9-4f5e-8dc0-658e86c4d4e6 req-cf85b615-5169-4219-96c3-ffa1a8014bdf 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "a8349cde-3de3-4359-9fba-8d329cab9476-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.895 189273 DEBUG oslo_concurrency.lockutils [req-a59f31dd-84a9-4f5e-8dc0-658e86c4d4e6 req-cf85b615-5169-4219-96c3-ffa1a8014bdf 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "a8349cde-3de3-4359-9fba-8d329cab9476-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.895 189273 DEBUG oslo_concurrency.lockutils [req-a59f31dd-84a9-4f5e-8dc0-658e86c4d4e6 req-cf85b615-5169-4219-96c3-ffa1a8014bdf 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "a8349cde-3de3-4359-9fba-8d329cab9476-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.895 189273 DEBUG nova.compute.manager [req-a59f31dd-84a9-4f5e-8dc0-658e86c4d4e6 req-cf85b615-5169-4219-96c3-ffa1a8014bdf 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] No waiting events found dispatching network-vif-unplugged-c99bd243-1114-4104-8d75-dd481789f958 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.896 189273 DEBUG nova.compute.manager [req-a59f31dd-84a9-4f5e-8dc0-658e86c4d4e6 req-cf85b615-5169-4219-96c3-ffa1a8014bdf 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Received event network-vif-unplugged-c99bd243-1114-4104-8d75-dd481789f958 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.897 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.925 189273 INFO nova.virt.libvirt.driver [-] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Instance destroyed successfully.
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.926 189273 DEBUG nova.objects.instance [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lazy-loading 'resources' on Instance uuid a8349cde-3de3-4359-9fba-8d329cab9476 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.937 189273 DEBUG nova.virt.libvirt.vif [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T08:25:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-qv6tptr-whvy4btuikeu-vmbwmtq4hym4-vnf-rixlnkr2j72q',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-qv6tptr-whvy4btuikeu-vmbwmtq4hym4-vnf-rixlnkr2j72q',id=2,image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T08:26:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='209b9e59-811e-4c2b-a756-c29ba92c4b5c'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='80e46844b3824928a6138235e5ede512',ramdisk_id='',reservation_id='r-oztih3eu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T08:26:00Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wODAxMDQ3NTY5NTgxMTA3ODc2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA4MDEwNDc1Njk1ODExMDc4NzY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDgwMTA0NzU2OTU4MTEwNzg3Nj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA4MDEwNDc1Njk1ODExMDc4NzY9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wODAxMDQ3NTY5NTgxMTA3ODc2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wODAxMDQ3NTY5NTgxMTA3ODc2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Nov 22 08:35:51 compute-0 nova_compute[189268]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDgwMTA0NzU2OTU4MTEwNzg3Nj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA4MDEwNDc1Njk1ODExMDc4NzY9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wODAxMDQ3NTY5NTgxMTA3ODc2PT0tLQo=',user_id='27ed1dd009ad4e29863ab5e3a9826c94',uuid=a8349cde-3de3-4359-9fba-8d329cab9476,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c99bd243-1114-4104-8d75-dd481789f958", "address": "fa:16:3e:2a:fd:a4", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc99bd243-11", "ovs_interfaceid": "c99bd243-1114-4104-8d75-dd481789f958", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.937 189273 DEBUG nova.network.os_vif_util [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converting VIF {"id": "c99bd243-1114-4104-8d75-dd481789f958", "address": "fa:16:3e:2a:fd:a4", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc99bd243-11", "ovs_interfaceid": "c99bd243-1114-4104-8d75-dd481789f958", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.938 189273 DEBUG nova.network.os_vif_util [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2a:fd:a4,bridge_name='br-int',has_traffic_filtering=True,id=c99bd243-1114-4104-8d75-dd481789f958,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc99bd243-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.938 189273 DEBUG os_vif [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2a:fd:a4,bridge_name='br-int',has_traffic_filtering=True,id=c99bd243-1114-4104-8d75-dd481789f958,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc99bd243-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.941 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.942 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc99bd243-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.945 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.949 189273 INFO os_vif [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2a:fd:a4,bridge_name='br-int',has_traffic_filtering=True,id=c99bd243-1114-4104-8d75-dd481789f958,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc99bd243-11')
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.950 189273 INFO nova.virt.libvirt.driver [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Deleting instance files /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476_del
Nov 22 08:35:51 compute-0 nova_compute[189268]: 2025-11-22 08:35:51.951 189273 INFO nova.virt.libvirt.driver [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Deletion of /var/lib/nova/instances/a8349cde-3de3-4359-9fba-8d329cab9476_del complete
Nov 22 08:35:52 compute-0 nova_compute[189268]: 2025-11-22 08:35:52.005 189273 INFO nova.compute.manager [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Took 0.39 seconds to destroy the instance on the hypervisor.
Nov 22 08:35:52 compute-0 nova_compute[189268]: 2025-11-22 08:35:52.005 189273 DEBUG oslo.service.loopingcall [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 08:35:52 compute-0 nova_compute[189268]: 2025-11-22 08:35:52.006 189273 DEBUG nova.compute.manager [-] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 08:35:52 compute-0 nova_compute[189268]: 2025-11-22 08:35:52.006 189273 DEBUG nova.network.neutron [-] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 08:35:52 compute-0 rsyslogd[236668]: message too long (8192) with configured size 8096, begin of message is: 2025-11-22 08:35:51.937 189273 DEBUG nova.virt.libvirt.vif [None req-a8a1ea99-a3 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 22 08:35:52 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:35:52.322 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:cf:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'd6:f7:8f:a1:cd:35'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:35:52 compute-0 nova_compute[189268]: 2025-11-22 08:35:52.324 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:52 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:35:52.324 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 08:35:52 compute-0 nova_compute[189268]: 2025-11-22 08:35:52.882 189273 DEBUG nova.compute.manager [req-b14478f5-9330-48b7-ae05-04d10579253b req-a051a6a8-39b7-4061-9113-6e2ca5bee682 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Received event network-changed-c99bd243-1114-4104-8d75-dd481789f958 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:35:52 compute-0 nova_compute[189268]: 2025-11-22 08:35:52.883 189273 DEBUG nova.compute.manager [req-b14478f5-9330-48b7-ae05-04d10579253b req-a051a6a8-39b7-4061-9113-6e2ca5bee682 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Refreshing instance network info cache due to event network-changed-c99bd243-1114-4104-8d75-dd481789f958. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:35:52 compute-0 nova_compute[189268]: 2025-11-22 08:35:52.883 189273 DEBUG oslo_concurrency.lockutils [req-b14478f5-9330-48b7-ae05-04d10579253b req-a051a6a8-39b7-4061-9113-6e2ca5bee682 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:35:52 compute-0 nova_compute[189268]: 2025-11-22 08:35:52.883 189273 DEBUG oslo_concurrency.lockutils [req-b14478f5-9330-48b7-ae05-04d10579253b req-a051a6a8-39b7-4061-9113-6e2ca5bee682 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:35:52 compute-0 nova_compute[189268]: 2025-11-22 08:35:52.883 189273 DEBUG nova.network.neutron [req-b14478f5-9330-48b7-ae05-04d10579253b req-a051a6a8-39b7-4061-9113-6e2ca5bee682 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Refreshing network info cache for port c99bd243-1114-4104-8d75-dd481789f958 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:35:54 compute-0 nova_compute[189268]: 2025-11-22 08:35:54.048 189273 DEBUG nova.compute.manager [req-aa03a032-b41b-4488-aee7-d2d418257e95 req-ca271607-2cb6-406a-b80f-6ce601016899 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Received event network-vif-plugged-c99bd243-1114-4104-8d75-dd481789f958 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:35:54 compute-0 nova_compute[189268]: 2025-11-22 08:35:54.049 189273 DEBUG oslo_concurrency.lockutils [req-aa03a032-b41b-4488-aee7-d2d418257e95 req-ca271607-2cb6-406a-b80f-6ce601016899 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "a8349cde-3de3-4359-9fba-8d329cab9476-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:35:54 compute-0 nova_compute[189268]: 2025-11-22 08:35:54.049 189273 DEBUG oslo_concurrency.lockutils [req-aa03a032-b41b-4488-aee7-d2d418257e95 req-ca271607-2cb6-406a-b80f-6ce601016899 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "a8349cde-3de3-4359-9fba-8d329cab9476-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:35:54 compute-0 nova_compute[189268]: 2025-11-22 08:35:54.050 189273 DEBUG oslo_concurrency.lockutils [req-aa03a032-b41b-4488-aee7-d2d418257e95 req-ca271607-2cb6-406a-b80f-6ce601016899 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "a8349cde-3de3-4359-9fba-8d329cab9476-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:35:54 compute-0 nova_compute[189268]: 2025-11-22 08:35:54.050 189273 DEBUG nova.compute.manager [req-aa03a032-b41b-4488-aee7-d2d418257e95 req-ca271607-2cb6-406a-b80f-6ce601016899 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] No waiting events found dispatching network-vif-plugged-c99bd243-1114-4104-8d75-dd481789f958 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:35:54 compute-0 nova_compute[189268]: 2025-11-22 08:35:54.050 189273 WARNING nova.compute.manager [req-aa03a032-b41b-4488-aee7-d2d418257e95 req-ca271607-2cb6-406a-b80f-6ce601016899 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Received unexpected event network-vif-plugged-c99bd243-1114-4104-8d75-dd481789f958 for instance with vm_state active and task_state deleting.
Nov 22 08:35:54 compute-0 nova_compute[189268]: 2025-11-22 08:35:54.052 189273 DEBUG nova.network.neutron [-] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:35:54 compute-0 nova_compute[189268]: 2025-11-22 08:35:54.075 189273 INFO nova.compute.manager [-] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Took 2.07 seconds to deallocate network for instance.
Nov 22 08:35:54 compute-0 nova_compute[189268]: 2025-11-22 08:35:54.130 189273 DEBUG oslo_concurrency.lockutils [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:35:54 compute-0 nova_compute[189268]: 2025-11-22 08:35:54.132 189273 DEBUG oslo_concurrency.lockutils [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:35:54 compute-0 nova_compute[189268]: 2025-11-22 08:35:54.256 189273 DEBUG nova.compute.provider_tree [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:35:54 compute-0 nova_compute[189268]: 2025-11-22 08:35:54.274 189273 DEBUG nova.scheduler.client.report [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:35:54 compute-0 nova_compute[189268]: 2025-11-22 08:35:54.297 189273 DEBUG oslo_concurrency.lockutils [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:35:54 compute-0 nova_compute[189268]: 2025-11-22 08:35:54.323 189273 INFO nova.scheduler.client.report [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Deleted allocations for instance a8349cde-3de3-4359-9fba-8d329cab9476
Nov 22 08:35:54 compute-0 nova_compute[189268]: 2025-11-22 08:35:54.384 189273 DEBUG oslo_concurrency.lockutils [None req-a8a1ea99-a337-483e-a563-9cbef646dea7 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "a8349cde-3de3-4359-9fba-8d329cab9476" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:35:54 compute-0 nova_compute[189268]: 2025-11-22 08:35:54.403 189273 DEBUG nova.network.neutron [req-b14478f5-9330-48b7-ae05-04d10579253b req-a051a6a8-39b7-4061-9113-6e2ca5bee682 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Updated VIF entry in instance network info cache for port c99bd243-1114-4104-8d75-dd481789f958. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:35:54 compute-0 nova_compute[189268]: 2025-11-22 08:35:54.404 189273 DEBUG nova.network.neutron [req-b14478f5-9330-48b7-ae05-04d10579253b req-a051a6a8-39b7-4061-9113-6e2ca5bee682 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Updating instance_info_cache with network_info: [{"id": "c99bd243-1114-4104-8d75-dd481789f958", "address": "fa:16:3e:2a:fd:a4", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc99bd243-11", "ovs_interfaceid": "c99bd243-1114-4104-8d75-dd481789f958", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:35:54 compute-0 nova_compute[189268]: 2025-11-22 08:35:54.419 189273 DEBUG oslo_concurrency.lockutils [req-b14478f5-9330-48b7-ae05-04d10579253b req-a051a6a8-39b7-4061-9113-6e2ca5bee682 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-a8349cde-3de3-4359-9fba-8d329cab9476" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:35:56 compute-0 podman[244617]: 2025-11-22 08:35:56.149494317 +0000 UTC m=+0.096397452 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, name=ubi9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_id=edpm, io.openshift.tags=base rhel9, release=1214.1726694543, vendor=Red Hat, Inc.)
Nov 22 08:35:56 compute-0 podman[244618]: 2025-11-22 08:35:56.191058363 +0000 UTC m=+0.136475827 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 22 08:35:56 compute-0 nova_compute[189268]: 2025-11-22 08:35:56.899 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:56 compute-0 nova_compute[189268]: 2025-11-22 08:35:56.944 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:35:59 compute-0 podman[203476]: time="2025-11-22T08:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:35:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:35:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
Nov 22 08:36:01 compute-0 openstack_network_exporter[205661]: ERROR   08:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:36:01 compute-0 openstack_network_exporter[205661]: ERROR   08:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:36:01 compute-0 openstack_network_exporter[205661]: ERROR   08:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:36:01 compute-0 openstack_network_exporter[205661]: ERROR   08:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:36:02 compute-0 nova_compute[189268]: 2025-11-22 08:36:02.231 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:02 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:36:02.327 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:36:02 compute-0 podman[244661]: 2025-11-22 08:36:02.388309086 +0000 UTC m=+0.120092353 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.buildah.version=1.33.7, vcs-type=git)
Nov 22 08:36:04 compute-0 podman[244682]: 2025-11-22 08:36:04.125132759 +0000 UTC m=+0.072094523 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:36:06 compute-0 nova_compute[189268]: 2025-11-22 08:36:06.904 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:06 compute-0 nova_compute[189268]: 2025-11-22 08:36:06.919 189273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763800551.9178455, a8349cde-3de3-4359-9fba-8d329cab9476 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:36:06 compute-0 nova_compute[189268]: 2025-11-22 08:36:06.920 189273 INFO nova.compute.manager [-] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] VM Stopped (Lifecycle Event)
Nov 22 08:36:06 compute-0 nova_compute[189268]: 2025-11-22 08:36:06.939 189273 DEBUG nova.compute.manager [None req-da42f429-d31f-4fad-a38c-34aa168208fa - - - - - -] [instance: a8349cde-3de3-4359-9fba-8d329cab9476] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:36:07 compute-0 nova_compute[189268]: 2025-11-22 08:36:07.234 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:36:09.973 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:36:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:36:09.974 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:36:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:36:09.974 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:36:11 compute-0 nova_compute[189268]: 2025-11-22 08:36:11.907 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:12 compute-0 nova_compute[189268]: 2025-11-22 08:36:12.237 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:14 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 22 08:36:14 compute-0 podman[244707]: 2025-11-22 08:36:14.408192493 +0000 UTC m=+0.086578796 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 08:36:14 compute-0 podman[244709]: 2025-11-22 08:36:14.430003943 +0000 UTC m=+0.101159500 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 22 08:36:14 compute-0 podman[244708]: 2025-11-22 08:36:14.433688013 +0000 UTC m=+0.111299644 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 22 08:36:16 compute-0 nova_compute[189268]: 2025-11-22 08:36:16.909 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:17 compute-0 nova_compute[189268]: 2025-11-22 08:36:17.239 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:21 compute-0 nova_compute[189268]: 2025-11-22 08:36:21.913 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.092 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.093 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.094 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.104 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '78b5db02-f49a-4c0b-b4f6-8d3b3d689e66', 'name': 'test_0', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.108 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '64e4ab2b-2a08-4c3c-9561-94454cb0b482', 'name': 'vn-qv6tptr-cfkm2etzuijf-gntxycdg4jfb-vnf-tuynx42zciyf', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {'metering.server_group': '209b9e59-811e-4c2b-a756-c29ba92c4b5c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.111 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435', 'name': 'vn-qv6tptr-hea4zpteaolv-dnc7x4xkssdg-vnf-savd4bbetntp', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {'metering.server_group': '209b9e59-811e-4c2b-a756-c29ba92c4b5c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
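The three discovery lines above print one record per local libvirt instance. A hypothetical dataclass mirroring the logged fields (illustrative only, not ceilometer code):

    from dataclasses import dataclass, field

    @dataclass
    class DiscoveredInstance:
        id: str
        name: str
        flavor: dict          # id, name, vcpus, ram, disk, ephemeral, swap
        image: dict           # id
        os_type: str          # 'hvm'
        architecture: str     # 'x86_64'
        instance_name: str    # OS-EXT-SRV-ATTR:instance_name
        host: str             # OS-EXT-SRV-ATTR:host
        vm_state: str         # OS-EXT-STS:vm_state
        tenant_id: str
        user_id: str
        host_id: str          # hostId
        status: str
        metadata: dict = field(default_factory=dict)  # e.g. metering.server_group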
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.111 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.112 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.112 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.112 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.115 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-22T08:36:22.112193) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.118 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.bytes volume: 2388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.124 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.128 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.incoming.bytes volume: 1738 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.129 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
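The network.incoming.bytes block above shows the full per-pollster cycle: discovery, a coordination check, a heartbeat update, then one sample per instance. A compressed sketch of that control flow, with illustrative names only:

    from datetime import datetime, timezone

    def poll_one(name, discover, sample, heartbeats, hashring=None):
        instances = discover()                     # "Executing discovery process ..."
        if hashring is not None:                   # "Checking if we need coordination ..."
            raise NotImplementedError("coordination via hashrings not sketched")
        heartbeats[name] = datetime.now(timezone.utc)  # "Pollster heartbeat update: <name>"
        # One sample per discovered instance, as in the volume lines above.
        return [(inst["id"], name, sample(inst)) for inst in instances]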
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.129 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.129 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.129 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.130 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.130 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.130 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.130 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.130 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.131 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.131 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.131 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.131 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.131 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.131 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.132 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.132 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.132 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.133 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.133 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.133 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.134 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.134 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.134 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.134 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.134 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.134 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.134 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-22T08:36:22.130164) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-22T08:36:22.131961) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.135 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.135 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.135 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.135 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.136 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.136 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.138 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-22T08:36:22.134197) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.138 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-22T08:36:22.136098) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
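Note how the heartbeat confirmations above come from thread 12 while the polling runs on thread 15: a producer/consumer split, with the poller enqueueing names and a status thread timestamping them. A minimal sketch of that pattern (illustrative, not the agent's actual code):

    import queue
    import threading
    from datetime import datetime, timezone

    beats = queue.Queue()
    status = {}

    def status_thread():
        # Consumer side ("Updated heartbeat for <name> ...").
        while True:
            name = beats.get()
            status[name] = datetime.now(timezone.utc)
            beats.task_done()

    threading.Thread(target=status_thread, daemon=True).start()
    beats.put("cpu")    # producer side, from the polling thread
    beats.join()        # returns once the status thread has recorded it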
Nov 22 08:36:22 compute-0 podman[244767]: 2025-11-22 08:36:22.166551947 +0000 UTC m=+0.104546823 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.174 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/cpu volume: 43820000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 podman[244768]: 2025-11-22 08:36:22.185055078 +0000 UTC m=+0.118298224 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible)
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.208 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/cpu volume: 37490000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.234 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/cpu volume: 35470000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.235 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.235 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.235 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.236 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.236 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.236 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.236 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.236 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.237 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.237 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-22T08:36:22.236200) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.237 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.237 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.237 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.237 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.238 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.238 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.238 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-22T08:36:22.238111) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 nova_compute[189268]: 2025-11-22 08:36:22.244 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.268 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.269 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.269 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.295 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.296 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.296 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.334 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.335 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.336 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.337 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
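A quick sanity check on the capacity figures above: the two 1073741824-byte devices per instance match the flavor's 1 GiB root and 1 GiB ephemeral disks exactly; the third, much smaller device is presumably a config drive (an assumption; the log does not name the devices).

    GIB = 1024 ** 3
    assert 1 * GIB == 1073741824    # flavor disk=1 and ephemeral=1, in bytes
    print(485376 / 1024, "KiB")     # the small third device: 474.0 KiB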
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.338 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.338 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.338 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.339 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.339 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.340 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-22T08:36:22.339358) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.439 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.439 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.440 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.521 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.521 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.522 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.596 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.597 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.597 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.598 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.598 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.598 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.598 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.598 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.598 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.598 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.599 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.599 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.599 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-22T08:36:22.598740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.599 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.600 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.600 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.600 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.allocation volume: 22093824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.600 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.600 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.601 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.601 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.601 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.601 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.601 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.601 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.602 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-22T08:36:22.601736) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.602 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.602 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.602 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.602 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.603 15 DEBUG ceilometer.polling.manager [-] Skipping pollster network.incoming.bytes.rate; no new resources were found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
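The skip line above shows the short-circuit taken when a rate pollster's discovery returns nothing new for the cycle. Sketched with hypothetical names:

    def maybe_poll(name, resources, sample):
        if not resources:
            # Mirrors "Skipping pollster <name>; no new resources ... this cycle".
            return []
        return [sample(r) for r in resources]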
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.603 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.603 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.603 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.603 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.603 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.603 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 1339396359 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.603 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 138141875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-22T08:36:22.603367) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.604 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 117550863 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.604 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.latency volume: 1133591681 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.604 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.latency volume: 382437315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.604 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.latency volume: 288491761 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.605 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.latency volume: 1137897097 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.605 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.latency volume: 138924505 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.605 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.latency volume: 148372768 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.605 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
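The disk.device.read.latency volumes above are cumulative nanosecond counters, three per instance (one per virtual disk). With the libvirt Python bindings that the compute agent ultimately relies on, the same counters can be read via blockStatsFlags; a minimal sketch, where the connection URI and the device name "vda" are assumptions:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")        # assumed local URI
    dom = conn.lookupByUUIDString("78b5db02-f49a-4c0b-b4f6-8d3b3d689e66")
    stats = dom.blockStatsFlags("vda")                   # per-device counter dict
    print(stats.get("rd_total_times"))   # cumulative read latency, ns
    print(stats.get("wr_total_times"))   # cumulative write latency, ns
    conn.close()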
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.605 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.606 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.606 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.606 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.606 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.606 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.606 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.incoming.bytes.delta volume: 1480 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.606 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-22T08:36:22.606223) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.607 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.607 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
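The .delta meters publish the change in a cumulative counter since the previous poll rather than the raw counter itself. A stdlib sketch of that bookkeeping; the cache layout and the reset handling are assumptions:

    _previous = {}  # (resource_id, meter) -> last cumulative reading

    def to_delta(resource_id, meter, cumulative):
        key = (resource_id, meter)
        last = _previous.get(key)
        _previous[key] = cumulative
        if last is None or cumulative < last:   # first poll, or counter reset
            return 0
        return cumulative - last

    # e.g. the 84 B incoming delta logged above for instance 78b5db02-...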
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.607 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.607 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.607 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.607 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.607 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.608 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.608 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.608 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.608 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.608 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-22T08:36:22.607828) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.608 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.609 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.609 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.609 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.609 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.610 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.610 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.610 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.610 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.610 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.610 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.610 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.610 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-22T08:36:22.610680) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.611 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.611 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.611 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.611 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.612 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.612 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.612 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.612 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.613 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.613 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.613 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.613 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.613 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.613 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.613 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.614 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.614 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.614 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.614 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.615 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.615 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-22T08:36:22.613643) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.615 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.615 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.615 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.616 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
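disk.device.usage is reported in bytes per device; libvirt exposes matching figures through blockInfo, which returns (capacity, allocation, physical) for an attached disk. Whether the meter maps to allocation or to physical is an inspector detail, so treat that mapping, the URI, and the device name as assumptions:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")                  # assumed URI
    dom = conn.lookupByUUIDString("64e4ab2b-2a08-4c3c-9561-94454cb0b482")
    capacity, allocation, physical = dom.blockInfo("vda")          # all in bytes
    print(physical)   # values like 21299200 (~20 MiB) match the samples above
    conn.close()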
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.616 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.616 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.616 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.616 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.616 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.616 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.617 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-22T08:36:22.616682) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.617 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.617 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.617 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.618 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.618 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.618 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.618 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.619 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
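Each per-device meter emits one volume line per disk, which is why every instance above shows three samples. Per-device samples are normally told apart by folding the device name into the resource id alongside the instance UUID; a sketch of that convention, with the exact format an assumption:

    def per_device_resource_id(instance_uuid: str, device: str) -> str:
        # e.g. "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435-vda"
        return f"{instance_uuid}-{device}"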
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.619 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.619 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.619 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.619 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.619 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.619 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.619 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.620 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.620 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
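power.state volume 1 for all three instances corresponds to libvirt's running state: the first field of dom.info() is the domain state enum, and VIR_DOMAIN_RUNNING equals 1. A self-contained check against the local hypervisor (URI assumed):

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")          # assumed URI
    for dom in conn.listAllDomains():
        state = dom.info()[0]                              # virDomainGetInfo state field
        print(dom.UUIDString(), state, state == libvirt.VIR_DOMAIN_RUNNING)  # 1 -> True
    conn.close()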
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.620 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.621 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.621 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.621 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-22T08:36:22.619581) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.621 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.621 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.621 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 18733649639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.621 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 19241219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.621 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.622 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.latency volume: 57392898403 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.622 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-22T08:36:22.621232) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.622 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.latency volume: 229562299 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.622 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.623 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.latency volume: 18953900848 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.623 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.latency volume: 13141545 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.623 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.623 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.624 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.624 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.624 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.624 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.624 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.624 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-22T08:36:22.624381) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.625 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
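Unlike the libvirt-backed meters, disk.ephemeral.size produces no _stats_to_sample lines here: it is derived from instance flavor metadata rather than from guest I/O counters. A stdlib sketch of that derivation; the field names follow the Nova flavor schema and are assumptions:

    def ephemeral_size_gb(instance: dict) -> int:
        # Nova flavors carry the ephemeral disk size in GB; 0 means none.
        return int(instance.get("flavor", {}).get("ephemeral", 0))

    print(ephemeral_size_gb({"flavor": {"ephemeral": 0}}))   # -> 0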
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.625 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.625 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.625 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.625 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.625 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.625 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.626 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.626 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.626 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.627 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.627 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.627 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.627 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.627 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-22T08:36:22.625665) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.627 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.628 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.628 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.628 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.628 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.628 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.628 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-22T08:36:22.627695) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.629 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.629 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.629 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.629 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.629 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-22T08:36:22.628992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.630 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.630 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.630 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.630 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.630 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.630 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.630 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.630 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.outgoing.bytes.delta volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.630 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-22T08:36:22.630493) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.631 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.631 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.631 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.631 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.631 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.631 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.631 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.632 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.632 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.632 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/memory.usage volume: 48.90625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.632 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/memory.usage volume: 49.00390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.632 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-22T08:36:22.632097) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.632 15 DEBUG ceilometer.compute.pollsters [-] cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/memory.usage volume: 49.12890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.633 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
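The fractional memory.usage volumes (48.90625, 49.00390625) are what a KiB counter divided by 1024 looks like, i.e. the meter is in MB. libvirt's memoryStats() returns the balloon statistics, in KiB, that such a figure can be computed from; the exact formula below (available minus unused) is an assumption and depends on the guest balloon driver:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")            # assumed URI
    dom = conn.lookupByUUIDString("78b5db02-f49a-4c0b-b4f6-8d3b3d689e66")
    stats = dom.memoryStats()                                # values in KiB
    if "available" in stats and "unused" in stats:
        print((stats["available"] - stats["unused"]) / 1024.0)  # MB, cf. 48.90625
    conn.close()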
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.633 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.634 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.634 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.634 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.634 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.634 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.634 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.634 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.634 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.635 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.635 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.635 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.635 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.635 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.635 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.635 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.635 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.635 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.635 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.636 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.636 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.636 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.636 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.636 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.636 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:36:22.636 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:36:25 compute-0 ovn_controller[97783]: 2025-11-22T08:36:25Z|00060|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Nov 22 08:36:26 compute-0 nova_compute[189268]: 2025-11-22 08:36:26.138 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:36:26 compute-0 nova_compute[189268]: 2025-11-22 08:36:26.139 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:36:26 compute-0 nova_compute[189268]: 2025-11-22 08:36:26.916 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:27 compute-0 nova_compute[189268]: 2025-11-22 08:36:27.007 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:36:27 compute-0 nova_compute[189268]: 2025-11-22 08:36:27.007 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:36:27 compute-0 nova_compute[189268]: 2025-11-22 08:36:27.008 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:36:27 compute-0 podman[244806]: 2025-11-22 08:36:27.15216524 +0000 UTC m=+0.105473108 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.expose-services=, container_name=kepler, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, version=9.4, config_id=edpm, io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Nov 22 08:36:27 compute-0 podman[244807]: 2025-11-22 08:36:27.17285412 +0000 UTC m=+0.123664320 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 22 08:36:27 compute-0 nova_compute[189268]: 2025-11-22 08:36:27.245 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:28 compute-0 nova_compute[189268]: 2025-11-22 08:36:28.487 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Updating instance_info_cache with network_info: [{"id": "3a644b09-361d-48d6-8efe-a180b1177788", "address": "fa:16:3e:7d:9f:dc", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a644b09-36", "ovs_interfaceid": "3a644b09-361d-48d6-8efe-a180b1177788", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:36:28 compute-0 nova_compute[189268]: 2025-11-22 08:36:28.506 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:36:28 compute-0 nova_compute[189268]: 2025-11-22 08:36:28.506 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 08:36:29 compute-0 nova_compute[189268]: 2025-11-22 08:36:29.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:36:29 compute-0 podman[203476]: time="2025-11-22T08:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:36:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:36:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Nov 22 08:36:31 compute-0 openstack_network_exporter[205661]: ERROR   08:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:36:31 compute-0 openstack_network_exporter[205661]: ERROR   08:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:36:31 compute-0 openstack_network_exporter[205661]: ERROR   08:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:36:31 compute-0 openstack_network_exporter[205661]: ERROR   08:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:36:31 compute-0 openstack_network_exporter[205661]: ERROR   08:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:36:31 compute-0 nova_compute[189268]: 2025-11-22 08:36:31.917 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:32 compute-0 nova_compute[189268]: 2025-11-22 08:36:32.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:36:32 compute-0 nova_compute[189268]: 2025-11-22 08:36:32.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:36:32 compute-0 nova_compute[189268]: 2025-11-22 08:36:32.249 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:33 compute-0 nova_compute[189268]: 2025-11-22 08:36:33.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:36:33 compute-0 nova_compute[189268]: 2025-11-22 08:36:33.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:36:33 compute-0 podman[244852]: 2025-11-22 08:36:33.146361975 +0000 UTC m=+0.089933316 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 22 08:36:34 compute-0 nova_compute[189268]: 2025-11-22 08:36:34.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:36:35 compute-0 podman[244872]: 2025-11-22 08:36:35.121209344 +0000 UTC m=+0.076046470 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:36:36 compute-0 nova_compute[189268]: 2025-11-22 08:36:36.919 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:37 compute-0 nova_compute[189268]: 2025-11-22 08:36:37.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:36:37 compute-0 nova_compute[189268]: 2025-11-22 08:36:37.252 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.122 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.123 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.123 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.125 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.231 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.300 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.302 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.378 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.379 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.455 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.456 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.535 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.545 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.616 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.618 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.686 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.688 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.766 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.768 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.838 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.845 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.915 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.917 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.995 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:36:38 compute-0 nova_compute[189268]: 2025-11-22 08:36:38.997 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:36:39 compute-0 nova_compute[189268]: 2025-11-22 08:36:39.087 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:36:39 compute-0 nova_compute[189268]: 2025-11-22 08:36:39.089 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:36:39 compute-0 nova_compute[189268]: 2025-11-22 08:36:39.171 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:36:39 compute-0 nova_compute[189268]: 2025-11-22 08:36:39.618 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:36:39 compute-0 nova_compute[189268]: 2025-11-22 08:36:39.620 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4809MB free_disk=72.46041107177734GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:36:39 compute-0 nova_compute[189268]: 2025-11-22 08:36:39.620 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:36:39 compute-0 nova_compute[189268]: 2025-11-22 08:36:39.621 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:36:39 compute-0 nova_compute[189268]: 2025-11-22 08:36:39.713 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:36:39 compute-0 nova_compute[189268]: 2025-11-22 08:36:39.714 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:36:39 compute-0 nova_compute[189268]: 2025-11-22 08:36:39.714 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 64e4ab2b-2a08-4c3c-9561-94454cb0b482 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:36:39 compute-0 nova_compute[189268]: 2025-11-22 08:36:39.714 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:36:39 compute-0 nova_compute[189268]: 2025-11-22 08:36:39.714 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:36:39 compute-0 nova_compute[189268]: 2025-11-22 08:36:39.808 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:36:39 compute-0 nova_compute[189268]: 2025-11-22 08:36:39.822 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:36:39 compute-0 nova_compute[189268]: 2025-11-22 08:36:39.858 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:36:39 compute-0 nova_compute[189268]: 2025-11-22 08:36:39.859 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:36:41 compute-0 nova_compute[189268]: 2025-11-22 08:36:41.921 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:42 compute-0 nova_compute[189268]: 2025-11-22 08:36:42.256 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:42 compute-0 nova_compute[189268]: 2025-11-22 08:36:42.859 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:36:44 compute-0 podman[244933]: 2025-11-22 08:36:44.800624914 +0000 UTC m=+0.079099054 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 08:36:44 compute-0 podman[244938]: 2025-11-22 08:36:44.81232145 +0000 UTC m=+0.080133401 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 08:36:44 compute-0 podman[244932]: 2025-11-22 08:36:44.818888848 +0000 UTC m=+0.105060696 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 22 08:36:46 compute-0 nova_compute[189268]: 2025-11-22 08:36:46.923 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:47 compute-0 nova_compute[189268]: 2025-11-22 08:36:47.258 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:51 compute-0 nova_compute[189268]: 2025-11-22 08:36:51.926 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:52 compute-0 nova_compute[189268]: 2025-11-22 08:36:52.261 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:53 compute-0 podman[244990]: 2025-11-22 08:36:53.15465499 +0000 UTC m=+0.109791644 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a)
Nov 22 08:36:53 compute-0 podman[244991]: 2025-11-22 08:36:53.181504217 +0000 UTC m=+0.122590940 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:36:56 compute-0 nova_compute[189268]: 2025-11-22 08:36:56.928 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:57 compute-0 nova_compute[189268]: 2025-11-22 08:36:57.264 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:36:58 compute-0 podman[245027]: 2025-11-22 08:36:58.157263239 +0000 UTC m=+0.096770702 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.openshift.expose-services=, name=ubi9)
Nov 22 08:36:58 compute-0 podman[245028]: 2025-11-22 08:36:58.220880861 +0000 UTC m=+0.152964283 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 08:36:59 compute-0 podman[203476]: time="2025-11-22T08:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:36:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:36:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Nov 22 08:37:01 compute-0 openstack_network_exporter[205661]: ERROR   08:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:37:01 compute-0 openstack_network_exporter[205661]: ERROR   08:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:37:01 compute-0 openstack_network_exporter[205661]: ERROR   08:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:37:01 compute-0 openstack_network_exporter[205661]: ERROR   08:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:37:01 compute-0 openstack_network_exporter[205661]: ERROR   08:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:37:01 compute-0 nova_compute[189268]: 2025-11-22 08:37:01.932 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:02 compute-0 nova_compute[189268]: 2025-11-22 08:37:02.267 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:04 compute-0 podman[245071]: 2025-11-22 08:37:04.140174386 +0000 UTC m=+0.087473889 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, version=9.6, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_id=edpm, managed_by=edpm_ansible, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, architecture=x86_64)
Nov 22 08:37:04 compute-0 sshd-session[245069]: Invalid user hadoop from 80.94.92.164 port 57620
Nov 22 08:37:05 compute-0 sshd-session[245069]: Connection closed by invalid user hadoop 80.94.92.164 port 57620 [preauth]
Nov 22 08:37:06 compute-0 podman[245091]: 2025-11-22 08:37:06.187158131 +0000 UTC m=+0.139124268 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 22 08:37:06 compute-0 nova_compute[189268]: 2025-11-22 08:37:06.936 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:07 compute-0 nova_compute[189268]: 2025-11-22 08:37:07.271 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:37:09.974 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:37:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:37:09.975 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:37:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:37:09.976 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:37:11 compute-0 sshd-session[245114]: banner exchange: Connection from 195.88.120.62 port 43539: invalid format
Nov 22 08:37:11 compute-0 nova_compute[189268]: 2025-11-22 08:37:11.939 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:12 compute-0 nova_compute[189268]: 2025-11-22 08:37:12.273 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:15 compute-0 podman[245116]: 2025-11-22 08:37:15.149897917 +0000 UTC m=+0.090343407 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 22 08:37:15 compute-0 podman[245115]: 2025-11-22 08:37:15.153037103 +0000 UTC m=+0.099230908 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 08:37:15 compute-0 podman[245117]: 2025-11-22 08:37:15.157286228 +0000 UTC m=+0.081248381 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 08:37:16 compute-0 nova_compute[189268]: 2025-11-22 08:37:16.942 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:17 compute-0 nova_compute[189268]: 2025-11-22 08:37:17.275 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:21 compute-0 nova_compute[189268]: 2025-11-22 08:37:21.946 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:22 compute-0 nova_compute[189268]: 2025-11-22 08:37:22.277 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:24 compute-0 podman[245173]: 2025-11-22 08:37:24.135887497 +0000 UTC m=+0.078548678 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:37:24 compute-0 podman[245172]: 2025-11-22 08:37:24.177009491 +0000 UTC m=+0.112604121 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 08:37:26 compute-0 nova_compute[189268]: 2025-11-22 08:37:26.947 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:27 compute-0 nova_compute[189268]: 2025-11-22 08:37:27.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:37:27 compute-0 nova_compute[189268]: 2025-11-22 08:37:27.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:37:27 compute-0 nova_compute[189268]: 2025-11-22 08:37:27.280 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:28 compute-0 nova_compute[189268]: 2025-11-22 08:37:28.016 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-64e4ab2b-2a08-4c3c-9561-94454cb0b482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:37:28 compute-0 nova_compute[189268]: 2025-11-22 08:37:28.018 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-64e4ab2b-2a08-4c3c-9561-94454cb0b482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:37:28 compute-0 nova_compute[189268]: 2025-11-22 08:37:28.018 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:37:29 compute-0 podman[245208]: 2025-11-22 08:37:29.150194064 +0000 UTC m=+0.098916510 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.tags=base rhel9, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release-0.7.12=, architecture=x86_64, config_id=edpm, container_name=kepler, distribution-scope=public, build-date=2024-09-18T21:23:30)
Nov 22 08:37:29 compute-0 podman[245209]: 2025-11-22 08:37:29.213152008 +0000 UTC m=+0.147657579 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 08:37:29 compute-0 nova_compute[189268]: 2025-11-22 08:37:29.507 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Updating instance_info_cache with network_info: [{"id": "433ff318-0c74-4ba4-ac48-8114bc74a566", "address": "fa:16:3e:4d:1a:4a", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433ff318-0c", "ovs_interfaceid": "433ff318-0c74-4ba4-ac48-8114bc74a566", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:37:29 compute-0 nova_compute[189268]: 2025-11-22 08:37:29.614 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-64e4ab2b-2a08-4c3c-9561-94454cb0b482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:37:29 compute-0 nova_compute[189268]: 2025-11-22 08:37:29.615 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 08:37:29 compute-0 podman[203476]: time="2025-11-22T08:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:37:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:37:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Nov 22 08:37:30 compute-0 nova_compute[189268]: 2025-11-22 08:37:30.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:37:31 compute-0 openstack_network_exporter[205661]: ERROR   08:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:37:31 compute-0 openstack_network_exporter[205661]: ERROR   08:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:37:31 compute-0 openstack_network_exporter[205661]: ERROR   08:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:37:31 compute-0 openstack_network_exporter[205661]: ERROR   08:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:37:31 compute-0 openstack_network_exporter[205661]: ERROR   08:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:37:31 compute-0 nova_compute[189268]: 2025-11-22 08:37:31.949 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:32 compute-0 nova_compute[189268]: 2025-11-22 08:37:32.283 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:33 compute-0 nova_compute[189268]: 2025-11-22 08:37:33.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:37:33 compute-0 nova_compute[189268]: 2025-11-22 08:37:33.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:37:34 compute-0 nova_compute[189268]: 2025-11-22 08:37:34.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:37:34 compute-0 nova_compute[189268]: 2025-11-22 08:37:34.102 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:37:35 compute-0 nova_compute[189268]: 2025-11-22 08:37:35.096 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:37:35 compute-0 podman[245254]: 2025-11-22 08:37:35.173472932 +0000 UTC m=+0.109755653 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, release=1755695350, architecture=x86_64, version=9.6, com.redhat.component=ubi9-minimal-container)
Nov 22 08:37:36 compute-0 nova_compute[189268]: 2025-11-22 08:37:36.952 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:37 compute-0 nova_compute[189268]: 2025-11-22 08:37:37.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:37:37 compute-0 podman[245274]: 2025-11-22 08:37:37.144723843 +0000 UTC m=+0.098980471 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 22 08:37:37 compute-0 nova_compute[189268]: 2025-11-22 08:37:37.286 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:38 compute-0 nova_compute[189268]: 2025-11-22 08:37:38.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.123 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.124 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.124 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.125 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.217 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.295 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.298 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.359 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.362 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.433 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.434 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.527 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.538 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.619 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.620 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.686 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.687 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.752 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.753 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.853 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.861 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.941 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:37:40 compute-0 nova_compute[189268]: 2025-11-22 08:37:40.950 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:37:41 compute-0 nova_compute[189268]: 2025-11-22 08:37:41.048 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:37:41 compute-0 nova_compute[189268]: 2025-11-22 08:37:41.049 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:37:41 compute-0 nova_compute[189268]: 2025-11-22 08:37:41.114 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:37:41 compute-0 nova_compute[189268]: 2025-11-22 08:37:41.115 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:37:41 compute-0 nova_compute[189268]: 2025-11-22 08:37:41.186 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:37:41 compute-0 nova_compute[189268]: 2025-11-22 08:37:41.673 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:37:41 compute-0 nova_compute[189268]: 2025-11-22 08:37:41.675 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4795MB free_disk=72.46046829223633GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:37:41 compute-0 nova_compute[189268]: 2025-11-22 08:37:41.675 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:37:41 compute-0 nova_compute[189268]: 2025-11-22 08:37:41.676 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:37:41 compute-0 nova_compute[189268]: 2025-11-22 08:37:41.757 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:37:41 compute-0 nova_compute[189268]: 2025-11-22 08:37:41.758 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:37:41 compute-0 nova_compute[189268]: 2025-11-22 08:37:41.758 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 64e4ab2b-2a08-4c3c-9561-94454cb0b482 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:37:41 compute-0 nova_compute[189268]: 2025-11-22 08:37:41.758 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:37:41 compute-0 nova_compute[189268]: 2025-11-22 08:37:41.758 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:37:41 compute-0 nova_compute[189268]: 2025-11-22 08:37:41.846 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:37:41 compute-0 nova_compute[189268]: 2025-11-22 08:37:41.862 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:37:41 compute-0 nova_compute[189268]: 2025-11-22 08:37:41.863 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:37:41 compute-0 nova_compute[189268]: 2025-11-22 08:37:41.863 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:37:41 compute-0 nova_compute[189268]: 2025-11-22 08:37:41.954 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:42 compute-0 nova_compute[189268]: 2025-11-22 08:37:42.290 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:43 compute-0 nova_compute[189268]: 2025-11-22 08:37:43.863 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:37:46 compute-0 podman[245336]: 2025-11-22 08:37:46.14377188 +0000 UTC m=+0.094229714 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 22 08:37:46 compute-0 podman[245337]: 2025-11-22 08:37:46.150001988 +0000 UTC m=+0.092014533 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:37:46 compute-0 podman[245338]: 2025-11-22 08:37:46.157726118 +0000 UTC m=+0.097182184 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
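
The three podman records above are periodic healthcheck results: podman runs each container's configured healthcheck 'test' command (here '/openstack/healthcheck') on a timer and logs health_status together with the container's full config_data. The same check can be triggered by hand with "podman healthcheck run multipathd", using the container name shown in the record.
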
Nov 22 08:37:46 compute-0 nova_compute[189268]: 2025-11-22 08:37:46.955 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:47 compute-0 nova_compute[189268]: 2025-11-22 08:37:47.293 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.559 189273 DEBUG oslo_concurrency.lockutils [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.559 189273 DEBUG oslo_concurrency.lockutils [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.560 189273 DEBUG oslo_concurrency.lockutils [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.560 189273 DEBUG oslo_concurrency.lockutils [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.561 189273 DEBUG oslo_concurrency.lockutils [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
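
The Acquiring/acquired/released triplets above, and throughout this log, come from oslo.concurrency's lock helpers, which wrap a callable and log wait and hold times at DEBUG. A minimal sketch of the pattern (not nova's literal code, though it matches the decorated inner function named in the messages):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435')
    def do_terminate_instance():
        # Critical section: operations on this instance UUID are
        # serialized; oslo logs the "waited"/"held" durations around it.
        pass

    do_terminate_instance()
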
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.562 189273 INFO nova.compute.manager [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Terminating instance
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.563 189273 DEBUG nova.compute.manager [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 08:37:51 compute-0 kernel: tap3a644b09-36 (unregistering): left promiscuous mode
Nov 22 08:37:51 compute-0 NetworkManager[56326]: <info>  [1763800671.6105] device (tap3a644b09-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 08:37:51 compute-0 ovn_controller[97783]: 2025-11-22T08:37:51Z|00061|binding|INFO|Releasing lport 3a644b09-361d-48d6-8efe-a180b1177788 from this chassis (sb_readonly=0)
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.621 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:51 compute-0 ovn_controller[97783]: 2025-11-22T08:37:51Z|00062|binding|INFO|Setting lport 3a644b09-361d-48d6-8efe-a180b1177788 down in Southbound
Nov 22 08:37:51 compute-0 ovn_controller[97783]: 2025-11-22T08:37:51Z|00063|binding|INFO|Removing iface tap3a644b09-36 ovn-installed in OVS
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.625 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.636 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:37:51.638 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:9f:dc 192.168.0.192'], port_security=['fa:16:3e:7d:9f:dc 192.168.0.192'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-eigzbqv6tptr-hea4zpteaolv-dnc7x4xkssdg-port-wswwvb7qczwb', 'neutron:cidrs': '192.168.0.192/24', 'neutron:device_id': 'cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-eigzbqv6tptr-hea4zpteaolv-dnc7x4xkssdg-port-wswwvb7qczwb', 'neutron:project_id': '80e46844b3824928a6138235e5ede512', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d35d3a2-03b3-4b0d-a4c4-f066616bbaa8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.207', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a46a1c4a-0f65-4313-a2a5-5e5bba4e3fd3, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=3a644b09-361d-48d6-8efe-a180b1177788) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:37:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:37:51.639 106642 INFO neutron.agent.ovn.metadata.agent [-] Port 3a644b09-361d-48d6-8efe-a180b1177788 in datapath 02517cc7-8060-4764-b9b0-b1d7f59e3ae8 unbound from our chassis
Nov 22 08:37:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:37:51.640 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02517cc7-8060-4764-b9b0-b1d7f59e3ae8
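
The "Matched UPDATE" record shows ovsdbapp's event machinery: the metadata agent registers row events against the OVN southbound Port_Binding table and reacts when a port's chassis binding changes. A stripped-down sketch of such an event class (the real PortBindingUpdatedEvent lives in neutron.agent.ovn.metadata.agent; the handler body here is illustrative):

    from ovsdbapp.backend.ovs_idl import event

    class PortBindingUpdated(event.RowEvent):
        def __init__(self):
            # Match 'update' events on Port_Binding, as in the repr above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event_, row, old):
            # Called once matches() succeeds; the agent tears down or
            # provisions metadata for the port's datapath at this point.
            print('Port_Binding changed:', row.logical_port)
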
Nov 22 08:37:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:37:51.660 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[797034dd-ccbd-4c28-8bb5-cd3df8de4e47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:37:51 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 22 08:37:51 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 1min 17.146s CPU time.
Nov 22 08:37:51 compute-0 systemd-machined[155703]: Machine qemu-4-instance-00000004 terminated.
Nov 22 08:37:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:37:51.696 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[bb86c755-9f67-4495-b44c-06ad0941a745]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:37:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:37:51.701 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[dec475f0-bd30-4982-926a-b2b87f496d26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:37:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:37:51.738 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[56a122f3-34ae-4b07-afd9-df24e7a985e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:37:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:37:51.760 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[271851f7-108b-4720-a955-6c8a7952ec9c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02517cc7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:86:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 17, 'rx_bytes': 532, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 17, 'rx_bytes': 532, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501085, 'reachable_time': 39670, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245409, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:37:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:37:51.780 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[3a33b775-85d4-41c1-9a0f-8c229e40e094]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap02517cc7-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501097, 'tstamp': 501097}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245410, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap02517cc7-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501100, 'tstamp': 501100}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245410, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:37:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:37:51.783 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02517cc7-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.786 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.793 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:37:51.794 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02517cc7-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:37:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:37:51.794 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:37:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:37:51.795 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02517cc7-80, col_values=(('external_ids', {'iface-id': '5e2a8859-83a6-4000-bcad-5571f3c7bd5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:37:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:37:51.795 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
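
The three transactions above rewire the metadata tap: drop tap02517cc7-80 from br-ex if it exists, ensure it is on br-int, and point its external_ids:iface-id at the metadata port; the last two are reported as no-ops ("Transaction caused no change"). Rough command-line equivalents, a sketch only (needs ovs-vsctl and sufficient privileges):

    import subprocess

    for cmd in (
        # DelPortCommand(if_exists=True) on br-ex
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', 'tap02517cc7-80'],
        # AddPortCommand(may_exist=True) on br-int
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap02517cc7-80'],
        # DbSetCommand on the Interface row
        ['ovs-vsctl', 'set', 'Interface', 'tap02517cc7-80',
         'external_ids:iface-id=5e2a8859-83a6-4000-bcad-5571f3c7bd5d'],
    ):
        subprocess.run(cmd, check=True)
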
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.849 189273 DEBUG nova.compute.manager [req-880a43de-74e8-47b4-8343-c1546d4b86d3 req-ec2ed4cd-0060-487d-b59b-2bda3cba845a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Received event network-vif-unplugged-3a644b09-361d-48d6-8efe-a180b1177788 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.850 189273 DEBUG oslo_concurrency.lockutils [req-880a43de-74e8-47b4-8343-c1546d4b86d3 req-ec2ed4cd-0060-487d-b59b-2bda3cba845a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.850 189273 DEBUG oslo_concurrency.lockutils [req-880a43de-74e8-47b4-8343-c1546d4b86d3 req-ec2ed4cd-0060-487d-b59b-2bda3cba845a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.850 189273 DEBUG oslo_concurrency.lockutils [req-880a43de-74e8-47b4-8343-c1546d4b86d3 req-ec2ed4cd-0060-487d-b59b-2bda3cba845a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.850 189273 DEBUG nova.compute.manager [req-880a43de-74e8-47b4-8343-c1546d4b86d3 req-ec2ed4cd-0060-487d-b59b-2bda3cba845a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] No waiting events found dispatching network-vif-unplugged-3a644b09-361d-48d6-8efe-a180b1177788 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.851 189273 DEBUG nova.compute.manager [req-880a43de-74e8-47b4-8343-c1546d4b86d3 req-ec2ed4cd-0060-487d-b59b-2bda3cba845a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Received event network-vif-unplugged-3a644b09-361d-48d6-8efe-a180b1177788 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.887 189273 INFO nova.virt.libvirt.driver [-] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Instance destroyed successfully.
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.888 189273 DEBUG nova.objects.instance [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lazy-loading 'resources' on Instance uuid cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.905 189273 DEBUG nova.virt.libvirt.vif [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T08:31:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-qv6tptr-hea4zpteaolv-dnc7x4xkssdg-vnf-savd4bbetntp',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-qv6tptr-hea4zpteaolv-dnc7x4xkssdg-vnf-savd4bbetntp',id=4,image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T08:32:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='209b9e59-811e-4c2b-a756-c29ba92c4b5c'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='80e46844b3824928a6138235e5ede512',ramdisk_id='',reservation_id='r-ju3bsu4u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T08:32:06Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT00MTU3OTE5NzIxMjIxNTM1OTU4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTQxNTc5MTk3MjEyMjE1MzU5NTg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NDE1NzkxOTcyMTIyMTUzNTk1OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTQxNTc5MTk3MjEyMjE1MzU5NTg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT00MTU3OTE5NzIxMjIxNTM1OTU4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT00MTU3OTE5NzIxMjIxNTM1OTU4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Nov 22 08:37:51 compute-0 nova_compute[189268]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NDE1NzkxOTcyMTIyMTUzNTk1OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTQxNTc5MTk3MjEyMjE1MzU5NTg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT00MTU3OTE5NzIxMjIxNTM1OTU4PT0tLQo=',user_id='27ed1dd009ad4e29863ab5e3a9826c94',uuid=cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3a644b09-361d-48d6-8efe-a180b1177788", "address": "fa:16:3e:7d:9f:dc", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a644b09-36", "ovs_interfaceid": "3a644b09-361d-48d6-8efe-a180b1177788", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.905 189273 DEBUG nova.network.os_vif_util [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converting VIF {"id": "3a644b09-361d-48d6-8efe-a180b1177788", "address": "fa:16:3e:7d:9f:dc", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a644b09-36", "ovs_interfaceid": "3a644b09-361d-48d6-8efe-a180b1177788", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.906 189273 DEBUG nova.network.os_vif_util [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7d:9f:dc,bridge_name='br-int',has_traffic_filtering=True,id=3a644b09-361d-48d6-8efe-a180b1177788,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3a644b09-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
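
The oversized instance record above (the one rsyslogd flags as too long at 08:37:52 below) embeds user_data as a base64-encoded MIME multipart cloud-init payload: cloud-config, a boothook, a part-handler, and heat-cfntools pieces. A standard-library sketch for inspecting such a blob; user_data.b64 is a hypothetical file holding the user_data= value copied from the record:

    import base64
    from email import message_from_string

    # b64decode's default mode skips stray newlines a log copy may carry
    with open('user_data.b64') as f:
        payload = base64.b64decode(f.read()).decode()
    for part in message_from_string(payload).walk():
        # e.g. text/cloud-config cloud-config, text/cloud-boothook
        # boothook.sh, text/part-handler part-handler.py, ...
        print(part.get_content_type(), part.get_filename())
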
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.906 189273 DEBUG os_vif [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:9f:dc,bridge_name='br-int',has_traffic_filtering=True,id=3a644b09-361d-48d6-8efe-a180b1177788,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3a644b09-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.908 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.909 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a644b09-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.911 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.913 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.918 189273 INFO os_vif [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:9f:dc,bridge_name='br-int',has_traffic_filtering=True,id=3a644b09-361d-48d6-8efe-a180b1177788,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3a644b09-36')
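
The unplug sequence from "Unplugging vif" to "Successfully unplugged" is nova handing the VIF object to the os-vif library, which dispatches to the 'ovs' plugin named in the repr. A rough sketch of that call path, with values copied from the VIFOpenVSwitch repr above; exact fields and constructor details may vary by os-vif version:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads plugins, including 'ovs'
    my_vif = vif.VIFOpenVSwitch(
        id='3a644b09-361d-48d6-8efe-a180b1177788',
        address='fa:16:3e:7d:9f:dc',
        bridge_name='br-int',
        vif_name='tap3a644b09-36',
        plugin='ovs',
        network=network.Network(id='02517cc7-8060-4764-b9b0-b1d7f59e3ae8'))
    info = instance_info.InstanceInfo(
        uuid='cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435',
        name='instance-00000004')
    os_vif.unplug(my_vif, info)  # logs "Unplugging vif ..." on the way in
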
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.919 189273 INFO nova.virt.libvirt.driver [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Deleting instance files /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435_del
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.920 189273 INFO nova.virt.libvirt.driver [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Deletion of /var/lib/nova/instances/cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435_del complete
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.958 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:37:51.972 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:cf:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'd6:f7:8f:a1:cd:35'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.972 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:37:51.973 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.985 189273 INFO nova.compute.manager [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Took 0.42 seconds to destroy the instance on the hypervisor.
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.986 189273 DEBUG oslo.service.loopingcall [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.986 189273 DEBUG nova.compute.manager [-] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 08:37:51 compute-0 nova_compute[189268]: 2025-11-22 08:37:51.986 189273 DEBUG nova.network.neutron [-] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 08:37:52 compute-0 rsyslogd[236668]: message too long (8192) with configured size 8096, begin of message is: 2025-11-22 08:37:51.905 189273 DEBUG nova.virt.libvirt.vif [None req-fd652550-b5 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
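
rsyslogd dropped the tail of the 8192-byte nova.virt.libvirt.vif message here because its configured limit is 8096 bytes, which is why the long instance DEBUG record above appears split. If complete messages are wanted, the limit can be raised with the legacy directive "$MaxMessageSize 16384" placed near the top of /etc/rsyslog.conf, before input modules load (16384 is an arbitrary example; the rsyslog.com/e/2445 link in the message documents this error code).
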
Nov 22 08:37:52 compute-0 nova_compute[189268]: 2025-11-22 08:37:52.947 189273 DEBUG nova.compute.manager [req-a759e7a3-055a-4118-a4f8-a71379b49666 req-470c70f5-a3ae-44cd-9e9a-d221f18bf6e0 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Received event network-changed-3a644b09-361d-48d6-8efe-a180b1177788 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:37:52 compute-0 nova_compute[189268]: 2025-11-22 08:37:52.948 189273 DEBUG nova.compute.manager [req-a759e7a3-055a-4118-a4f8-a71379b49666 req-470c70f5-a3ae-44cd-9e9a-d221f18bf6e0 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Refreshing instance network info cache due to event network-changed-3a644b09-361d-48d6-8efe-a180b1177788. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:37:52 compute-0 nova_compute[189268]: 2025-11-22 08:37:52.948 189273 DEBUG oslo_concurrency.lockutils [req-a759e7a3-055a-4118-a4f8-a71379b49666 req-470c70f5-a3ae-44cd-9e9a-d221f18bf6e0 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:37:52 compute-0 nova_compute[189268]: 2025-11-22 08:37:52.948 189273 DEBUG oslo_concurrency.lockutils [req-a759e7a3-055a-4118-a4f8-a71379b49666 req-470c70f5-a3ae-44cd-9e9a-d221f18bf6e0 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:37:52 compute-0 nova_compute[189268]: 2025-11-22 08:37:52.949 189273 DEBUG nova.network.neutron [req-a759e7a3-055a-4118-a4f8-a71379b49666 req-470c70f5-a3ae-44cd-9e9a-d221f18bf6e0 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Refreshing network info cache for port 3a644b09-361d-48d6-8efe-a180b1177788 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:37:53 compute-0 nova_compute[189268]: 2025-11-22 08:37:53.105 189273 INFO nova.network.neutron [req-a759e7a3-055a-4118-a4f8-a71379b49666 req-470c70f5-a3ae-44cd-9e9a-d221f18bf6e0 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Port 3a644b09-361d-48d6-8efe-a180b1177788 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 22 08:37:53 compute-0 nova_compute[189268]: 2025-11-22 08:37:53.106 189273 DEBUG nova.network.neutron [req-a759e7a3-055a-4118-a4f8-a71379b49666 req-470c70f5-a3ae-44cd-9e9a-d221f18bf6e0 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:37:53 compute-0 nova_compute[189268]: 2025-11-22 08:37:53.138 189273 DEBUG oslo_concurrency.lockutils [req-a759e7a3-055a-4118-a4f8-a71379b49666 req-470c70f5-a3ae-44cd-9e9a-d221f18bf6e0 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:37:53 compute-0 nova_compute[189268]: 2025-11-22 08:37:53.274 189273 DEBUG nova.network.neutron [-] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:37:53 compute-0 nova_compute[189268]: 2025-11-22 08:37:53.294 189273 INFO nova.compute.manager [-] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Took 1.31 seconds to deallocate network for instance.
Nov 22 08:37:53 compute-0 nova_compute[189268]: 2025-11-22 08:37:53.334 189273 DEBUG oslo_concurrency.lockutils [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:37:53 compute-0 nova_compute[189268]: 2025-11-22 08:37:53.335 189273 DEBUG oslo_concurrency.lockutils [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:37:53 compute-0 nova_compute[189268]: 2025-11-22 08:37:53.437 189273 DEBUG nova.compute.provider_tree [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:37:53 compute-0 nova_compute[189268]: 2025-11-22 08:37:53.448 189273 DEBUG nova.scheduler.client.report [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:37:53 compute-0 nova_compute[189268]: 2025-11-22 08:37:53.467 189273 DEBUG oslo_concurrency.lockutils [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:37:53 compute-0 nova_compute[189268]: 2025-11-22 08:37:53.487 189273 INFO nova.scheduler.client.report [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Deleted allocations for instance cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435
Nov 22 08:37:53 compute-0 nova_compute[189268]: 2025-11-22 08:37:53.554 189273 DEBUG oslo_concurrency.lockutils [None req-fd652550-b515-4060-9452-bd0ae65afebe 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:37:53 compute-0 nova_compute[189268]: 2025-11-22 08:37:53.919 189273 DEBUG nova.compute.manager [req-bbd4bd1e-0ed5-468e-a292-4e8500234227 req-2bd1f9c6-f3b3-405b-8c27-c3bd329c9a74 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Received event network-vif-plugged-3a644b09-361d-48d6-8efe-a180b1177788 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:37:53 compute-0 nova_compute[189268]: 2025-11-22 08:37:53.919 189273 DEBUG oslo_concurrency.lockutils [req-bbd4bd1e-0ed5-468e-a292-4e8500234227 req-2bd1f9c6-f3b3-405b-8c27-c3bd329c9a74 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:37:53 compute-0 nova_compute[189268]: 2025-11-22 08:37:53.920 189273 DEBUG oslo_concurrency.lockutils [req-bbd4bd1e-0ed5-468e-a292-4e8500234227 req-2bd1f9c6-f3b3-405b-8c27-c3bd329c9a74 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:37:53 compute-0 nova_compute[189268]: 2025-11-22 08:37:53.921 189273 DEBUG oslo_concurrency.lockutils [req-bbd4bd1e-0ed5-468e-a292-4e8500234227 req-2bd1f9c6-f3b3-405b-8c27-c3bd329c9a74 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:37:53 compute-0 nova_compute[189268]: 2025-11-22 08:37:53.921 189273 DEBUG nova.compute.manager [req-bbd4bd1e-0ed5-468e-a292-4e8500234227 req-2bd1f9c6-f3b3-405b-8c27-c3bd329c9a74 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] No waiting events found dispatching network-vif-plugged-3a644b09-361d-48d6-8efe-a180b1177788 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:37:53 compute-0 nova_compute[189268]: 2025-11-22 08:37:53.921 189273 WARNING nova.compute.manager [req-bbd4bd1e-0ed5-468e-a292-4e8500234227 req-2bd1f9c6-f3b3-405b-8c27-c3bd329c9a74 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Received unexpected event network-vif-plugged-3a644b09-361d-48d6-8efe-a180b1177788 for instance with vm_state deleted and task_state None.
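The sequence above is nova serializing external instance events: it takes a per-instance "<uuid>-events" lock, pops any waiter registered for the event, and warns when nothing is waiting because the instance has already been deleted. A minimal sketch of that pop-then-warn pattern, assuming a hypothetical waiter registry rather than nova's actual data structures:

from oslo_concurrency import lockutils

_waiters = {}  # hypothetical registry: instance_uuid -> {event_name: waiter}

def pop_instance_event(instance_uuid, event_name):
    # Nova-style per-instance "<uuid>-events" lock around the pop.
    @lockutils.synchronized(instance_uuid + '-events')
    def _pop_event():
        return _waiters.get(instance_uuid, {}).pop(event_name, None)
    waiter = _pop_event()
    if waiter is None:
        # Corresponds to the "No waiting events found dispatching ..." line.
        print('Received unexpected event %s' % event_name)
    return waiter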
Nov 22 08:37:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:37:54.976 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:37:55 compute-0 podman[245432]: 2025-11-22 08:37:55.147206205 +0000 UTC m=+0.083809311 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:37:55 compute-0 podman[245433]: 2025-11-22 08:37:55.154858972 +0000 UTC m=+0.091089638 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 22 08:37:56 compute-0 nova_compute[189268]: 2025-11-22 08:37:56.912 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:37:56 compute-0 nova_compute[189268]: 2025-11-22 08:37:56.960 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
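The recurring "[POLLIN] on fd 26" lines are the OVS Python IDL's poller logging a wakeup each time the ovsdb connection's file descriptor becomes readable. A minimal sketch of such a wakeup loop using the stdlib select module (the fd number and timeout are illustrative assumptions, not the ovs.poller implementation):

import select

def wait_readable(fd, timeout_ms=5000):
    poller = select.poll()
    poller.register(fd, select.POLLIN)
    for ready_fd, events in poller.poll(timeout_ms):
        if events & select.POLLIN:
            print('[POLLIN] on fd %d' % ready_fd)  # same shape as the log line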
Nov 22 08:37:59 compute-0 podman[203476]: time="2025-11-22T08:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:37:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:37:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
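The two GET lines show a Go HTTP client scraping podman's libpod REST API over the local socket (the podman_exporter config later in this log mounts /run/podman/podman.sock and sets CONTAINER_HOST to it). A stdlib-only sketch of the same containers/json request; the socket path and API version are taken from those lines:

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    # HTTPConnection that dials a unix socket instead of TCP.
    def __init__(self, path):
        super().__init__('localhost')
        self.unix_path = path
    def connect(self):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(self.unix_path)
        self.sock = s

conn = UnixHTTPConnection('/run/podman/podman.sock')
conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
containers = json.loads(conn.getresponse().read())
print([c['Names'] for c in containers])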
Nov 22 08:38:00 compute-0 podman[245471]: 2025-11-22 08:38:00.135080646 +0000 UTC m=+0.087167721 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, name=ubi9, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0)
Nov 22 08:38:00 compute-0 podman[245472]: 2025-11-22 08:38:00.1857983 +0000 UTC m=+0.131852622 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 08:38:01 compute-0 openstack_network_exporter[205661]: ERROR   08:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:38:01 compute-0 openstack_network_exporter[205661]: ERROR   08:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:38:01 compute-0 openstack_network_exporter[205661]: ERROR   08:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:38:01 compute-0 openstack_network_exporter[205661]: ERROR   08:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:38:01 compute-0 openstack_network_exporter[205661]: ERROR   08:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
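These exporter errors are expected on a compute node: appctl-style clients locate a daemon through a control socket named <daemon>.<pid>.ctl in its run directory, and ovn-northd runs on controller nodes, not here. A sketch of that lookup convention (paths follow OVS/OVN defaults; this is not the exporter's actual code):

import glob
import os

def find_ctl_socket(rundir, daemon):
    # Running OVS/OVN daemons create <rundir>/<daemon>.<pid>.ctl.
    matches = glob.glob(os.path.join(rundir, '%s.*.ctl' % daemon))
    if not matches:
        raise FileNotFoundError('no control socket files found for ' + daemon)
    return matches[0]

# find_ctl_socket('/run/ovn', 'ovn-northd') raises here, matching the errors.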
Nov 22 08:38:01 compute-0 nova_compute[189268]: 2025-11-22 08:38:01.915 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:01 compute-0 nova_compute[189268]: 2025-11-22 08:38:01.963 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:06 compute-0 podman[245514]: 2025-11-22 08:38:06.178183494 +0000 UTC m=+0.112578290 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Nov 22 08:38:06 compute-0 nova_compute[189268]: 2025-11-22 08:38:06.883 189273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763800671.8812869, cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:38:06 compute-0 nova_compute[189268]: 2025-11-22 08:38:06.884 189273 INFO nova.compute.manager [-] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] VM Stopped (Lifecycle Event)
Nov 22 08:38:06 compute-0 nova_compute[189268]: 2025-11-22 08:38:06.917 189273 DEBUG nova.compute.manager [None req-e89e25d9-f97a-4997-a471-3506271fb9a3 - - - - - -] [instance: cb2042e7-d9d4-4e57-a56a-5cb2fd3e5435] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:38:06 compute-0 nova_compute[189268]: 2025-11-22 08:38:06.918 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:06 compute-0 nova_compute[189268]: 2025-11-22 08:38:06.967 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:08 compute-0 podman[245533]: 2025-11-22 08:38:08.110686746 +0000 UTC m=+0.066058440 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 22 08:38:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:38:09.975 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:38:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:38:09.976 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:38:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:38:09.977 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" released by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:38:11 compute-0 nova_compute[189268]: 2025-11-22 08:38:11.921 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:11 compute-0 nova_compute[189268]: 2025-11-22 08:38:11.970 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:13 compute-0 sshd-session[245557]: Accepted publickey for zuul from 38.129.56.128 port 35030 ssh2: RSA SHA256:g1zSa//+/mxUXmf2M16Bba4a7+RLV+1PmLKCUOr+UqA
Nov 22 08:38:13 compute-0 systemd-logind[826]: New session 30 of user zuul.
Nov 22 08:38:13 compute-0 systemd[1]: Started Session 30 of User zuul.
Nov 22 08:38:13 compute-0 sshd-session[245557]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 08:38:14 compute-0 sudo[245734]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkyycqrcfuzvyqnhmkmkyplzilftsjom ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763800693.5377464-58802-211131127746728/AnsiballZ_command.py'
Nov 22 08:38:14 compute-0 sudo[245734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:38:14 compute-0 python3[245736]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:38:14 compute-0 sudo[245734]: pam_unix(sudo:session): session closed for user root
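The sudo/python3 lines above are a Zuul job checking the ceilometer container's status by listing names and statuses with podman and filtering for ceilometer_agent_compute. An equivalent standalone check (the podman command is copied from the logged _raw_params):

import subprocess

out = subprocess.run(
    ['podman', 'ps', '-a', '--format', '{{.Names}} {{.Status}}'],
    capture_output=True, text=True, check=True,
).stdout
print([line for line in out.splitlines()
       if 'ceilometer_agent_compute' in line])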
Nov 22 08:38:16 compute-0 nova_compute[189268]: 2025-11-22 08:38:16.924 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:16 compute-0 nova_compute[189268]: 2025-11-22 08:38:16.972 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:17 compute-0 podman[245777]: 2025-11-22 08:38:17.169765923 +0000 UTC m=+0.098857588 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:38:17 compute-0 podman[245778]: 2025-11-22 08:38:17.17485566 +0000 UTC m=+0.107464800 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 08:38:17 compute-0 podman[245776]: 2025-11-22 08:38:17.191778458 +0000 UTC m=+0.133004282 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 22 08:38:21 compute-0 nova_compute[189268]: 2025-11-22 08:38:21.929 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:21 compute-0 nova_compute[189268]: 2025-11-22 08:38:21.975 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.092 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] exceeds the number of worker threads available to execute them; therefore, the polling process may take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.093 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.094 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
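Each "Registering pollster" line hands one stevedore-loaded extension to a ThreadPoolExecutor; with the single worker thread logged at 08:38:22.093, the submitted tasks run strictly one after another. A minimal sketch of that serialization (pollster names are taken from this log; the wiring is illustrative, not ceilometer's manager code):

from concurrent.futures import ThreadPoolExecutor

def run_pollster(name):
    return 'polled ' + name

executor = ThreadPoolExecutor(max_workers=1)  # '[1] threads', as logged above
futures = [executor.submit(run_pollster, n)
           for n in ('network.incoming.bytes', 'network.outgoing.packets', 'cpu')]
print([f.result() for f in futures])  # completes in submission order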
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.104 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '78b5db02-f49a-4c0b-b4f6-8d3b3d689e66', 'name': 'test_0', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.110 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '64e4ab2b-2a08-4c3c-9561-94454cb0b482', 'name': 'vn-qv6tptr-cfkm2etzuijf-gntxycdg4jfb-vnf-tuynx42zciyf', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {'metering.server_group': '209b9e59-811e-4c2b-a756-c29ba92c4b5c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.111 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.111 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.111 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.111 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.112 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-22T08:38:22.111661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.120 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.bytes volume: 2472 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.128 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.129 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.129 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.129 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.130 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.130 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.130 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.130 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.131 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.132 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.132 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.133 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.133 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-22T08:38:22.130391) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.133 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.133 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.134 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-22T08:38:22.133665) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.134 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.134 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.135 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.135 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.135 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.136 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.136 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.136 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.136 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.137 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.137 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.138 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.138 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.138 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.138 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.139 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-22T08:38:22.136337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-22T08:38:22.139105) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.183 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/cpu volume: 45360000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.212 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/cpu volume: 39070000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.213 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
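The cpu meter is cumulative guest CPU time in nanoseconds, so the two samples above amount to roughly 45.4 s and 39.1 s of CPU time:

for uuid, vol in (('78b5db02', 45360000000), ('64e4ab2b', 39070000000)):
    print(uuid, vol / 1e9, 's')  # 45.36 s and 39.07 s of cumulative CPU time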
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.213 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.214 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.214 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.214 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.214 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.215 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.215 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-22T08:38:22.214564) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.215 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.216 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
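Every polling cycle in this log runs the same coordination check, and for every pollster the answer is the same: no source requires coordination and the hashrings list is [None], meaning this agent samples all of its local instances by itself. When sources are coordinated across several agents, group members split the resources via hash-based partitioning (Ceilometer delegates group membership to tooz). A toy sketch of the partitioning idea only, not Ceilometer's actual implementation:

    import hashlib

    def owner(resource_id, members):
        """Toy hash partitioner: map a resource to one member of a group."""
        h = int(hashlib.md5(resource_id.encode()).hexdigest(), 16)
        return sorted(members)[h % len(members)]

    # With a single member every resource maps to it, which is why no
    # hashring is needed for the uncoordinated sources in this log.
    print(owner("78b5db02-f49a-4c0b-b4f6-8d3b3d689e66", ["compute-0"]))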
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.216 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.216 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.216 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.217 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.217 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.218 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-22T08:38:22.217227) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.244 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.245 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.245 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.271 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.272 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.272 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.272 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
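Each instance reports three disk.device.capacity samples, one per attached block device. The two repeated 1073741824-byte values are exactly 1 GiB each; the third device is far smaller and differs per instance (485376 bytes vs 583680 bytes). The arithmetic, as a quick check:

    GIB = 2 ** 30
    print(1073741824 / GIB)  # 1.0 -> the first two devices are exactly 1 GiB
    print(485376 / 1024)     # 474.0 KiB third device on 78b5db02-...
    print(583680 / 1024)     # 570.0 KiB third device on 64e4ab2b-...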
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.272 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.273 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.273 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.273 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.273 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.274 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-22T08:38:22.273502) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.366 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.367 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.367 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.457 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.458 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.458 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.458 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.459 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.459 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.459 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.459 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.459 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.459 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.460 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.460 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-22T08:38:22.459372) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.460 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.460 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.461 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.461 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.461 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.462 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.462 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.462 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.462 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.462 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.463 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.463 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-22T08:38:22.462400) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.463 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.463 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.463 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.463 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.464 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.464 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.464 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.464 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.464 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.464 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-22T08:38:22.464366) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.465 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 1339396359 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.465 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 138141875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.465 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 117550863 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.465 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.latency volume: 1133591681 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.465 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.latency volume: 382437315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.466 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.latency volume: 288491761 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.466 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.466 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.466 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.467 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.467 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.467 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.467 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-22T08:38:22.467251) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.467 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.468 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.468 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
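The .delta meters report the change since the previous poll rather than a running total, so the volume of 84 means each instance received 84 bytes during this polling interval. Deriving such a delta from two successive cumulative readings is a plain subtraction; a sketch with made-up counter values chosen to reproduce the 84 above:

    def bytes_delta(prev_total, curr_total):
        """Change between two successive cumulative byte counters."""
        return curr_total - prev_total

    # Hypothetical successive readings of a cumulative counter.
    print(bytes_delta(1500, 1584))  # 84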
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.468 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.468 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.468 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.468 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.469 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.469 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-22T08:38:22.468901) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.469 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.469 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.469 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.470 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.470 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.470 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.471 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.471 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.471 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.471 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.471 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.471 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.472 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.472 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-22T08:38:22.471626) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.472 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.472 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.472 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.473 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.473 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.473 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.473 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.473 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.474 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.474 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.474 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.474 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.475 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.474 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-22T08:38:22.474217) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.475 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.475 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.475 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.476 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.476 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.476 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.476 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.476 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.476 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.477 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.477 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.477 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-22T08:38:22.476891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.477 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.477 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.478 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.478 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.478 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.479 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.479 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.479 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.479 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.479 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.479 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.480 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-22T08:38:22.479575) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.480 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.480 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
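Both instances report power.state volume 1. Assuming this meter exposes libvirt's virDomainState codes (the log itself only shows the raw number), 1 corresponds to a running domain:

    # libvirt virDomainState codes (assumption: power.state reports these).
    LIBVIRT_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(LIBVIRT_STATE[1])  # running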
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.480 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.480 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.480 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.480 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.480 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.481 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 18733649639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.481 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-22T08:38:22.480818) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.481 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 19241219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.481 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.481 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.latency volume: 57392898403 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.482 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.latency volume: 229562299 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.482 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.482 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
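Like disk.device.read.latency earlier in this cycle, disk.device.write.latency is a cumulative time counter in nanoseconds (per the Ceilometer measurement docs), so the first-device values convert directly to seconds of accumulated write latency:

    NS_PER_S = 1_000_000_000
    print(18733649639 / NS_PER_S)  # ~18.7 s on 78b5db02-... device 1
    print(57392898403 / NS_PER_S)  # ~57.4 s on 64e4ab2b-... device 1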
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.482 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.482 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.482 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.482 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.483 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.483 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-22T08:38:22.482920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.483 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.483 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.483 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.483 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.484 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.484 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.484 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.484 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-22T08:38:22.484070) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.484 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.485 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.485 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.485 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.485 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.485 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.485 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.486 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-22T08:38:22.485550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.486 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.486 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.486 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.486 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.486 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.486 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.487 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.487 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-22T08:38:22.486654) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.487 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.487 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.487 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.487 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.487 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.487 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.487 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.488 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.488 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-22T08:38:22.487882) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.488 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.488 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.488 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.489 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.489 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.489 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.489 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.489 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.489 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.489 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-22T08:38:22.489343) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.489 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/memory.usage volume: 48.90625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.490 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/memory.usage volume: 49.00390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.490 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.490 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.491 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.491 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.491 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.491 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.491 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.491 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.491 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.491 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.491 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.491 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.491 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.491 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.491 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.492 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.492 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.492 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.492 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.492 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.492 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.492 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.492 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.492 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.492 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.492 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:38:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:38:22.492 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
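
The run above is one complete ceilometer polling pass: for each pollster the agent executes discovery, checks whether the pollster belongs to a coordinated source, records a heartbeat, turns the hypervisor stats into one sample per instance, and logs completion. A minimal sketch of that loop shape, assuming a hypothetical Pollster class (not the real ceilometer extension API):

    import datetime

    class Pollster:
        """Hypothetical stand-in for a ceilometer pollster extension."""
        def __init__(self, name):
            self.name = name

        def discover(self):
            # Real agents run a discovery method such as local_instances here.
            return ["78b5db02-f49a-4c0b-b4f6-8d3b3d689e66",
                    "64e4ab2b-2a08-4c3c-9561-94454cb0b482"]

        def get_sample(self, resource):
            # Real pollsters read libvirt/OVS counters; a constant stands in.
            return (resource, self.name, 0)

    heartbeats = {}

    def run_cycle(pollsters):
        for p in pollsters:
            resources = p.discover()
            if not resources:
                continue  # "Skip pollster ..., no new resources found this cycle"
            heartbeats[p.name] = datetime.datetime.now(datetime.timezone.utc)
            for r in resources:
                print(p.get_sample(r))
            print(f"Finished polling pollster {p.name}")

    run_cycle([Pollster("memory.usage"), Pollster("network.incoming.packets.error")])
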
Nov 22 08:38:25 compute-0 ovn_controller[97783]: 2025-11-22T08:38:25Z|00064|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 22 08:38:26 compute-0 podman[245840]: 2025-11-22 08:38:26.131702007 +0000 UTC m=+0.075582268 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 22 08:38:26 compute-0 podman[245839]: 2025-11-22 08:38:26.158623487 +0000 UTC m=+0.107668587 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
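
Both health_status events above embed the container's full config_data label as a Python-style dict literal. Because it is a literal, ast.literal_eval can parse it safely once the label text has been cut out of the log line; the value below is abridged from the ceilometer_agent_ipmi event:

    import ast

    label = ("{'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', "
             "'healthcheck': {'test': '/openstack/healthcheck ipmi', "
             "'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}}")

    config = ast.literal_eval(label)  # parses literals only; no code execution
    print(config["healthcheck"]["test"])  # -> /openstack/healthcheck ipmi
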
Nov 22 08:38:26 compute-0 nova_compute[189268]: 2025-11-22 08:38:26.932 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:26 compute-0 nova_compute[189268]: 2025-11-22 08:38:26.977 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:29 compute-0 nova_compute[189268]: 2025-11-22 08:38:29.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:38:29 compute-0 nova_compute[189268]: 2025-11-22 08:38:29.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:38:29 compute-0 nova_compute[189268]: 2025-11-22 08:38:29.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:38:29 compute-0 nova_compute[189268]: 2025-11-22 08:38:29.283 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:38:29 compute-0 nova_compute[189268]: 2025-11-22 08:38:29.283 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:38:29 compute-0 nova_compute[189268]: 2025-11-22 08:38:29.284 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:38:29 compute-0 nova_compute[189268]: 2025-11-22 08:38:29.284 189273 DEBUG nova.objects.instance [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:38:29 compute-0 podman[203476]: time="2025-11-22T08:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:38:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:38:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
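
The two GET lines are the podman system service answering libpod REST calls from a Go client over its unix socket. The same containers/json query can be issued by hand; a raw-HTTP sketch, assuming the rootful socket at /run/podman/podman.sock:

    import socket

    SOCKET_PATH = "/run/podman/podman.sock"  # assumed default rootful socket

    def libpod_get(path):
        # Minimal HTTP/1.1 over the unix socket; the reply may be chunked,
        # so return the raw bytes instead of a parsed body.
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(SOCKET_PATH)
        s.sendall(f"GET {path} HTTP/1.1\r\nHost: podman\r\nConnection: close\r\n\r\n".encode())
        chunks = []
        while data := s.recv(65536):
            chunks.append(data)
        s.close()
        return b"".join(chunks)

    raw = libpod_get("/v4.9.3/libpod/containers/json?all=true&external=false")
    print(raw.split(b"\r\n", 1)[0])  # status line, e.g. b'HTTP/1.1 200 OK'
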
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.020 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "e8c865a7-b309-4ee1-9843-bb58fc1c64b9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.020 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "e8c865a7-b309-4ee1-9843-bb58fc1c64b9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.036 189273 DEBUG nova.compute.manager [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.114 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.115 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.128 189273 DEBUG nova.virt.hardware [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.129 189273 INFO nova.compute.claims [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Claim successful on node compute-0.ctlplane.example.com
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.259 189273 DEBUG nova.compute.provider_tree [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.280 189273 DEBUG nova.scheduler.client.report [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
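
The inventory dict above fixes this node's schedulable capacity: placement computes it as (total - reserved) * allocation_ratio per resource class. Worked out for the logged values:

    # Capacity implied by the logged inventory: (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inventory.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
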
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.298 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updating instance_info_cache with network_info: [{"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "address": "fa:16:3e:4f:4a:5d", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.53", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4645bc8c-a8", "ovs_interfaceid": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.302 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.303 189273 DEBUG nova.compute.manager [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.339 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.339 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
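
The Updating instance_info_cache entry above carries the instance's whole network_info as one JSON document. Extracting the fixed and floating addresses is plain JSON traversal; the literal below is abridged from that log line:

    import json

    network_info = json.loads("""[{
      "id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe",
      "address": "fa:16:3e:4f:4a:5d",
      "network": {"label": "private", "subnets": [{
        "cidr": "192.168.0.0/24",
        "ips": [{"address": "192.168.0.53", "type": "fixed",
                 "floating_ips": [{"address": "192.168.122.224", "type": "floating"}]}]}]}
    }]""")

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)
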
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.339 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.362 189273 DEBUG nova.compute.manager [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.378 189273 INFO nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.405 189273 DEBUG nova.compute.manager [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.487 189273 DEBUG nova.compute.manager [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.488 189273 DEBUG nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.489 189273 INFO nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Creating image(s)
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.489 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "/var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.490 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.490 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
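
The Acquiring/acquired/released triplets throughout this build come from oslo.concurrency's lockutils, whose decorator wraps the target in a function named inner (which is why every line cites inner in lockutils.py). A sketch of the two forms, using the real oslo_concurrency API (pip install oslo.concurrency) with invented function bodies and an example lock_path:

    from oslo_concurrency import lockutils

    # File-based external lock: serializes access across processes via a
    # lock file under lock_path.
    @lockutils.synchronized("disk.info", external=True, lock_path="/tmp/locks")
    def write_to_disk_info_file():
        pass  # nova records the image's driver format at this point

    # In-process named lock, logged with the same Acquiring/acquired/released lines.
    with lockutils.lock("compute_resources"):
        pass

    write_to_disk_info_file()
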
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.491 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "1d7f8e073419c499459afad86b152b7fec19c8da" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:38:30 compute-0 nova_compute[189268]: 2025-11-22 08:38:30.491 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "1d7f8e073419c499459afad86b152b7fec19c8da" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:38:31 compute-0 podman[245879]: 2025-11-22 08:38:31.165919875 +0000 UTC m=+0.104327986 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., container_name=kepler, managed_by=edpm_ansible, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release-0.7.12=, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, release=1214.1726694543, io.openshift.tags=base rhel9)
Nov 22 08:38:31 compute-0 podman[245880]: 2025-11-22 08:38:31.236371563 +0000 UTC m=+0.159627544 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:38:31 compute-0 openstack_network_exporter[205661]: ERROR   08:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:38:31 compute-0 openstack_network_exporter[205661]: ERROR   08:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:38:31 compute-0 openstack_network_exporter[205661]: ERROR   08:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:38:31 compute-0 openstack_network_exporter[205661]: ERROR   08:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:38:31 compute-0 openstack_network_exporter[205661]: ERROR   08:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
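
The exporter errors above mean it found no *.ctl control sockets for the daemons it probes (ovn-northd, for one, normally runs on the control plane rather than on a compute node). A quick check of which control sockets actually exist, assuming the usual runtime directories:

    import glob

    # Typical OVS/OVN runtime dirs; exact paths vary by distribution.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern))
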
Nov 22 08:38:31 compute-0 nova_compute[189268]: 2025-11-22 08:38:31.591 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1d7f8e073419c499459afad86b152b7fec19c8da.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:31 compute-0 nova_compute[189268]: 2025-11-22 08:38:31.654 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1d7f8e073419c499459afad86b152b7fec19c8da.part --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:31 compute-0 nova_compute[189268]: 2025-11-22 08:38:31.655 189273 DEBUG nova.virt.images [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] b196ed1b-3c5a-4e95-b465-c850e4e858a7 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 22 08:38:31 compute-0 nova_compute[189268]: 2025-11-22 08:38:31.658 189273 DEBUG nova.privsep.utils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 22 08:38:31 compute-0 nova_compute[189268]: 2025-11-22 08:38:31.659 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1d7f8e073419c499459afad86b152b7fec19c8da.part /var/lib/nova/instances/_base/1d7f8e073419c499459afad86b152b7fec19c8da.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:31 compute-0 nova_compute[189268]: 2025-11-22 08:38:31.879 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1d7f8e073419c499459afad86b152b7fec19c8da.part /var/lib/nova/instances/_base/1d7f8e073419c499459afad86b152b7fec19c8da.converted" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:31 compute-0 nova_compute[189268]: 2025-11-22 08:38:31.884 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1d7f8e073419c499459afad86b152b7fec19c8da.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:31 compute-0 nova_compute[189268]: 2025-11-22 08:38:31.935 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:31 compute-0 nova_compute[189268]: 2025-11-22 08:38:31.963 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1d7f8e073419c499459afad86b152b7fec19c8da.converted --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:31 compute-0 nova_compute[189268]: 2025-11-22 08:38:31.965 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "1d7f8e073419c499459afad86b152b7fec19c8da" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.474s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:38:31 compute-0 nova_compute[189268]: 2025-11-22 08:38:31.980 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1d7f8e073419c499459afad86b152b7fec19c8da --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.002 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.062 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1d7f8e073419c499459afad86b152b7fec19c8da --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.063 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "1d7f8e073419c499459afad86b152b7fec19c8da" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.064 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "1d7f8e073419c499459afad86b152b7fec19c8da" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.078 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1d7f8e073419c499459afad86b152b7fec19c8da --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.138 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1d7f8e073419c499459afad86b152b7fec19c8da --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.139 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1d7f8e073419c499459afad86b152b7fec19c8da,backing_fmt=raw /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.184 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1d7f8e073419c499459afad86b152b7fec19c8da,backing_fmt=raw /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.185 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "1d7f8e073419c499459afad86b152b7fec19c8da" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
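
The qemu-img sequence above is nova's copy-on-write image backend at work: the fetched base image is converted from qcow2 to raw under _base, and the instance disk (like the ephemeral disk just below) is then created as a qcow2 overlay whose backing file is that raw base. The same chain reproduced with throwaway files, assuming qemu-img is installed (example paths; convert -t none needs direct I/O support, which the supports_direct_io check above verified):

    import subprocess

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Stand-in for the fetched qcow2 image (the .part file in the log).
    run("qemu-img", "create", "-f", "qcow2", "base-src.qcow2", "1G")
    # _base/<sha1>: convert qcow2 -> raw, as fetch_to_raw did above.
    run("qemu-img", "convert", "-t", "none", "-O", "raw", "-f", "qcow2",
        "base-src.qcow2", "base.raw")
    # Per-instance disk: qcow2 overlay backed by the raw base, 1 GiB virtual size.
    run("qemu-img", "create", "-f", "qcow2",
        "-o", "backing_file=base.raw,backing_fmt=raw", "disk", "1073741824")
    run("qemu-img", "info", "--force-share", "--output=json", "disk")
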
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.185 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1d7f8e073419c499459afad86b152b7fec19c8da --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.247 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1d7f8e073419c499459afad86b152b7fec19c8da --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.248 189273 DEBUG nova.virt.disk.api [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Checking if we can resize image /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.249 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.311 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.312 189273 DEBUG nova.virt.disk.api [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Cannot resize image /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.312 189273 DEBUG nova.objects.instance [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lazy-loading 'migration_context' on Instance uuid e8c865a7-b309-4ee1-9843-bb58fc1c64b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.326 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "/var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.327 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.329 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "/var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.355 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.420 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.426 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.427 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.437 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.503 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.504 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.556 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk.eph0 1073741824" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.558 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
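The qemu-img invocation above builds the instance's ephemeral disk as a copy-on-write qcow2 overlay on the cached raw base image, so the 1 GiB disk starts out nearly empty on the backing store. A minimal sketch of that step, assuming qemu-img on PATH and an existing backing file; the target path is illustrative, not taken from a live host:

    import subprocess

    def create_qcow2_overlay(backing_file, target, size_bytes):
        # qemu-img records the backing file and its format in the
        # overlay header; backing_fmt=raw matches the cached base
        # image probed above.
        cmd = [
            "env", "LC_ALL=C", "LANG=C",
            "qemu-img", "create", "-f", "qcow2",
            "-o", f"backing_file={backing_file},backing_fmt=raw",
            target, str(size_bytes),
        ]
        subprocess.run(cmd, check=True)

    create_qcow2_overlay(
        "/var/lib/nova/instances/_base/ephemeral_1_0706d66",
        "/tmp/disk.eph0.example",
        1073741824,  # 1 GiB, matching the flavor's ephemeral_gb=1
    )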
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.558 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.637 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.638 189273 DEBUG nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.638 189273 DEBUG nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Ensure instance console log exists: /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.639 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.639 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.640 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
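The paired "Acquiring lock / acquired / released" lines here and around the image creation come from oslo.concurrency's lockutils, which nova uses to serialize work on a shared resource name. A minimal sketch of both forms of that pattern, with illustrative lock names and empty bodies:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("ephemeral_1_0706d66")
    def create_qcow2_image():
        # Only one caller at a time may populate this cached image;
        # concurrent callers block until the lock is released.
        pass

    # Equivalent context-manager form, as used for short critical
    # sections like the mdev allocation above:
    with lockutils.lock("vgpu_resources"):
        pass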
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.642 189273 DEBUG nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-22T08:38:18Z,direct_url=<?>,disk_format='qcow2',id=b196ed1b-3c5a-4e95-b465-c850e4e858a7,min_disk=0,min_ram=0,name='fvt_testing_image',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-22T08:38:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'b196ed1b-3c5a-4e95-b465-c850e4e858a7'}], 'ephemerals': [{'device_name': '/dev/vdb', 'device_type': 'disk', 'size': 1, 'encryption_options': None, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.650 189273 WARNING nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.658 189273 DEBUG nova.virt.libvirt.host [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.659 189273 DEBUG nova.virt.libvirt.host [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.663 189273 DEBUG nova.virt.libvirt.host [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.664 189273 DEBUG nova.virt.libvirt.host [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
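The v1/v2 probe above succeeds on the unified hierarchy, where active controllers are advertised in a single file rather than per-controller mount points. A simplified sketch of the v2 half of that check, assuming the standard /sys/fs/cgroup mount; nova's actual probe differs in detail:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller():
        controllers = Path("/sys/fs/cgroup/cgroup.controllers")
        if not controllers.exists():
            return False  # host lacks a cgroups-v2 unified mount
        # File holds a space-separated controller list, e.g.
        # "cpuset cpu io memory pids".
        return "cpu" in controllers.read_text().split()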
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.664 189273 DEBUG nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.664 189273 DEBUG nova.virt.hardware [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T08:38:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='1e0d3c28-e855-410c-8bf5-88d3ab84c578',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-22T08:38:18Z,direct_url=<?>,disk_format='qcow2',id=b196ed1b-3c5a-4e95-b465-c850e4e858a7,min_disk=0,min_ram=0,name='fvt_testing_image',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-22T08:38:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.665 189273 DEBUG nova.virt.hardware [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.665 189273 DEBUG nova.virt.hardware [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.665 189273 DEBUG nova.virt.hardware [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.665 189273 DEBUG nova.virt.hardware [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.666 189273 DEBUG nova.virt.hardware [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.666 189273 DEBUG nova.virt.hardware [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.666 189273 DEBUG nova.virt.hardware [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.666 189273 DEBUG nova.virt.hardware [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.666 189273 DEBUG nova.virt.hardware [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.666 189273 DEBUG nova.virt.hardware [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
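The topology search above enumerates (sockets, cores, threads) triples for the flavor's single vCPU under effectively unbounded limits, and unsurprisingly finds only 1:1:1. A simplified sketch of that enumeration; nova's real implementation additionally orders results by preference and applies NUMA constraints:

    def possible_cpu_topologies(vcpus, max_sockets=65536,
                                max_cores=65536, max_threads=65536):
        # Keep every triple whose product is exactly the vCPU count
        # and that respects the per-dimension maxima.
        topologies = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topologies.append((s, c, t))
        return topologies

    print(possible_cpu_topologies(1))  # [(1, 1, 1)], as in the log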
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.671 189273 DEBUG nova.objects.instance [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lazy-loading 'pci_devices' on Instance uuid e8c865a7-b309-4ee1-9843-bb58fc1c64b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.685 189273 DEBUG nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] End _get_guest_xml xml=<domain type="kvm">
Nov 22 08:38:32 compute-0 nova_compute[189268]:   <uuid>e8c865a7-b309-4ee1-9843-bb58fc1c64b9</uuid>
Nov 22 08:38:32 compute-0 nova_compute[189268]:   <name>instance-00000006</name>
Nov 22 08:38:32 compute-0 nova_compute[189268]:   <memory>524288</memory>
Nov 22 08:38:32 compute-0 nova_compute[189268]:   <vcpu>1</vcpu>
Nov 22 08:38:32 compute-0 nova_compute[189268]:   <metadata>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <nova:name>fvt_testing_server</nova:name>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <nova:creationTime>2025-11-22 08:38:32</nova:creationTime>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <nova:flavor name="fvt_testing_flavor">
Nov 22 08:38:32 compute-0 nova_compute[189268]:         <nova:memory>512</nova:memory>
Nov 22 08:38:32 compute-0 nova_compute[189268]:         <nova:disk>1</nova:disk>
Nov 22 08:38:32 compute-0 nova_compute[189268]:         <nova:swap>0</nova:swap>
Nov 22 08:38:32 compute-0 nova_compute[189268]:         <nova:ephemeral>1</nova:ephemeral>
Nov 22 08:38:32 compute-0 nova_compute[189268]:         <nova:vcpus>1</nova:vcpus>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       </nova:flavor>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <nova:owner>
Nov 22 08:38:32 compute-0 nova_compute[189268]:         <nova:user uuid="27ed1dd009ad4e29863ab5e3a9826c94">admin</nova:user>
Nov 22 08:38:32 compute-0 nova_compute[189268]:         <nova:project uuid="80e46844b3824928a6138235e5ede512">admin</nova:project>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       </nova:owner>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <nova:root type="image" uuid="b196ed1b-3c5a-4e95-b465-c850e4e858a7"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <nova:ports/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     </nova:instance>
Nov 22 08:38:32 compute-0 nova_compute[189268]:   </metadata>
Nov 22 08:38:32 compute-0 nova_compute[189268]:   <sysinfo type="smbios">
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <system>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <entry name="manufacturer">RDO</entry>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <entry name="product">OpenStack Compute</entry>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <entry name="serial">e8c865a7-b309-4ee1-9843-bb58fc1c64b9</entry>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <entry name="uuid">e8c865a7-b309-4ee1-9843-bb58fc1c64b9</entry>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <entry name="family">Virtual Machine</entry>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     </system>
Nov 22 08:38:32 compute-0 nova_compute[189268]:   </sysinfo>
Nov 22 08:38:32 compute-0 nova_compute[189268]:   <os>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <boot dev="hd"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <smbios mode="sysinfo"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:   </os>
Nov 22 08:38:32 compute-0 nova_compute[189268]:   <features>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <acpi/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <apic/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <vmcoreinfo/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:   </features>
Nov 22 08:38:32 compute-0 nova_compute[189268]:   <clock offset="utc">
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <timer name="hpet" present="no"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:   </clock>
Nov 22 08:38:32 compute-0 nova_compute[189268]:   <cpu mode="host-model" match="exact">
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:38:32 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <target dev="vda" bus="virtio"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk.eph0"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <target dev="vdb" bus="virtio"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <disk type="file" device="cdrom">
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <driver name="qemu" type="raw" cache="none"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk.config"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <target dev="sda" bus="sata"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <serial type="pty">
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <log file="/var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/console.log" append="off"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     </serial>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <video>
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     </video>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <input type="tablet" bus="usb"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <rng model="virtio">
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <backend model="random">/dev/urandom</backend>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <controller type="usb" index="0"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     <memballoon model="virtio">
Nov 22 08:38:32 compute-0 nova_compute[189268]:       <stats period="10"/>
Nov 22 08:38:32 compute-0 nova_compute[189268]:     </memballoon>
Nov 22 08:38:32 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:38:32 compute-0 nova_compute[189268]: </domain>
Nov 22 08:38:32 compute-0 nova_compute[189268]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
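The domain XML dumped above is what nova hands to libvirt to realize the guest. A minimal sketch of an equivalent launch with the libvirt-python bindings, assuming access to qemu:///system; note this uses a one-shot transient create for brevity, whereas nova defines a persistent domain and then starts it:

    import libvirt

    xml = "..."  # the <domain type="kvm"> document logged above

    conn = libvirt.open("qemu:///system")
    try:
        # createXML validates the XML and boots a transient domain
        # in a single call.
        dom = conn.createXML(xml, 0)
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()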
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.754 189273 DEBUG nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.754 189273 DEBUG nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.755 189273 DEBUG nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:38:32 compute-0 nova_compute[189268]: 2025-11-22 08:38:32.755 189273 INFO nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Using config drive
Nov 22 08:38:33 compute-0 nova_compute[189268]: 2025-11-22 08:38:33.026 189273 INFO nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Creating config drive at /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk.config
Nov 22 08:38:33 compute-0 nova_compute[189268]: 2025-11-22 08:38:33.031 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl3chcusq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:33 compute-0 nova_compute[189268]: 2025-11-22 08:38:33.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:38:33 compute-0 nova_compute[189268]: 2025-11-22 08:38:33.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:38:33 compute-0 nova_compute[189268]: 2025-11-22 08:38:33.161 189273 DEBUG oslo_concurrency.processutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl3chcusq" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
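The config drive is just an ISO9660 image built from a staging directory; the volume label config-2 is what guest tooling such as cloud-init looks for. A minimal sketch of the mkisofs step logged above, with an illustrative staging tree and output path; real metadata files (meta_data.json, user_data) would be written under openstack/latest first:

    import os
    import subprocess
    import tempfile

    staging = tempfile.mkdtemp()
    os.makedirs(os.path.join(staging, "openstack", "latest"),
                exist_ok=True)

    subprocess.run([
        "/usr/bin/mkisofs", "-o", "/tmp/disk.config.example",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "example publisher", "-quiet", "-J", "-r",
        "-V", "config-2",  # the label cloud-init searches for
        staging,
    ], check=True)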
Nov 22 08:38:33 compute-0 systemd-machined[155703]: New machine qemu-6-instance-00000006.
Nov 22 08:38:33 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.096 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763800714.0954218, e8c865a7-b309-4ee1-9843-bb58fc1c64b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.096 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] VM Resumed (Lifecycle Event)
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.101 189273 DEBUG nova.compute.manager [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.102 189273 DEBUG nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.108 189273 INFO nova.virt.libvirt.driver [-] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Instance spawned successfully.
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.109 189273 DEBUG nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.121 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.131 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.137 189273 DEBUG nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.138 189273 DEBUG nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.139 189273 DEBUG nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.139 189273 DEBUG nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.140 189273 DEBUG nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.140 189273 DEBUG nova.virt.libvirt.driver [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.154 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.155 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763800714.0992143, e8c865a7-b309-4ee1-9843-bb58fc1c64b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.156 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] VM Started (Lifecycle Event)
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.179 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.186 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.190 189273 INFO nova.compute.manager [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Took 3.70 seconds to spawn the instance on the hypervisor.
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.190 189273 DEBUG nova.compute.manager [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.201 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] During sync_power_state the instance has a pending task (spawning). Skip.
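The sync_power_state entries above compare the database's integer power state (0) against what the hypervisor reports (1) and skip reconciliation because the spawn task still owns the instance. A heavily simplified sketch of that rule; the constants mirror nova.compute.power_state, and the logic is reduced to the case visible in this log:

    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0, 1, 3, 4

    def needs_sync(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # A pending task (e.g. "spawning") owns the state; defer.
            return False
        return db_power_state != vm_power_state

    print(needs_sync(NOSTATE, RUNNING, "spawning"))  # False -> "Skip."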
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.243 189273 INFO nova.compute.manager [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Took 4.16 seconds to build instance.
Nov 22 08:38:34 compute-0 nova_compute[189268]: 2025-11-22 08:38:34.258 189273 DEBUG oslo_concurrency.lockutils [None req-00c5d032-373c-47c2-9ff1-0a789cabfc95 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "e8c865a7-b309-4ee1-9843-bb58fc1c64b9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:38:35 compute-0 nova_compute[189268]: 2025-11-22 08:38:35.095 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:38:35 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 22 08:38:35 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 22 08:38:36 compute-0 nova_compute[189268]: 2025-11-22 08:38:36.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:38:36 compute-0 nova_compute[189268]: 2025-11-22 08:38:36.938 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:36 compute-0 nova_compute[189268]: 2025-11-22 08:38:36.984 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:37 compute-0 nova_compute[189268]: 2025-11-22 08:38:37.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:38:37 compute-0 podman[246008]: 2025-11-22 08:38:37.15543036 +0000 UTC m=+0.109720143 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container)
Nov 22 08:38:39 compute-0 podman[246029]: 2025-11-22 08:38:39.117603785 +0000 UTC m=+0.069313557 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.119 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.120 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.120 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.121 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.229 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.304 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.306 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.392 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.393 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.463 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.464 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.553 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.562 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.643 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.644 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.711 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.712 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.777 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.778 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.860 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.870 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.933 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.935 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.996 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:40 compute-0 nova_compute[189268]: 2025-11-22 08:38:40.998 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:41 compute-0 nova_compute[189268]: 2025-11-22 08:38:41.059 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:38:41 compute-0 nova_compute[189268]: 2025-11-22 08:38:41.060 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:38:41 compute-0 nova_compute[189268]: 2025-11-22 08:38:41.122 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
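Each disk probe in this audit runs qemu-img info under oslo.concurrency's prlimit helper, capping the child at a 1 GiB address space and 30 s of CPU so a malformed image cannot wedge the agent, and --force-share allows probing disks that the running guest holds open. A minimal sketch of one probe with JSON parsing; the instance path is illustrative:

    import json
    import subprocess

    def qemu_img_info(path):
        out = subprocess.check_output([
            "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
            "--as=1073741824", "--cpu=30", "--",
            "env", "LC_ALL=C", "LANG=C",
            "qemu-img", "info", path, "--force-share",
            "--output=json",
        ])
        return json.loads(out)

    info = qemu_img_info("/var/lib/nova/instances/example-uuid/disk")
    print(info["format"], info["virtual-size"],
          info.get("backing-filename"))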
Nov 22 08:38:41 compute-0 nova_compute[189268]: 2025-11-22 08:38:41.476 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:38:41 compute-0 nova_compute[189268]: 2025-11-22 08:38:41.478 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4842MB free_disk=72.45452499389648GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:38:41 compute-0 nova_compute[189268]: 2025-11-22 08:38:41.478 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:38:41 compute-0 nova_compute[189268]: 2025-11-22 08:38:41.479 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:38:41 compute-0 nova_compute[189268]: 2025-11-22 08:38:41.567 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:38:41 compute-0 nova_compute[189268]: 2025-11-22 08:38:41.567 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 64e4ab2b-2a08-4c3c-9561-94454cb0b482 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:38:41 compute-0 nova_compute[189268]: 2025-11-22 08:38:41.567 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance e8c865a7-b309-4ee1-9843-bb58fc1c64b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:38:41 compute-0 nova_compute[189268]: 2025-11-22 08:38:41.568 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:38:41 compute-0 nova_compute[189268]: 2025-11-22 08:38:41.568 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:38:41 compute-0 nova_compute[189268]: 2025-11-22 08:38:41.657 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:38:41 compute-0 nova_compute[189268]: 2025-11-22 08:38:41.670 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
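The inventory record above is what placement schedules against; usable capacity per resource class is (total - reserved) * allocation_ratio. A worked check against the logged numbers:

    # Worked example: effective placement capacity from the logged inventory.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)
    # VCPU 32.0       -> the 3 allocated vcpus above fit comfortably
    # MEMORY_MB 7167.0
    # DISK_GB 70.2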
Nov 22 08:38:41 compute-0 nova_compute[189268]: 2025-11-22 08:38:41.693 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:38:41 compute-0 nova_compute[189268]: 2025-11-22 08:38:41.694 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
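The acquire/release pair around "compute_resources" (waited 0.000s, held 0.215s) comes from oslo.concurrency's synchronized decorator, which serializes every resource-tracker mutation behind one named semaphore. A minimal sketch of the pattern, assuming only oslo.concurrency; the decorator emits the same acquire/wait/held DEBUG lines when debug logging is configured:

    # Sketch of the locking pattern behind the "compute_resources" lines.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # audit hypervisor resources, refresh placement inventory, ...
        pass

    update_available_resource()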
Nov 22 08:38:41 compute-0 nova_compute[189268]: 2025-11-22 08:38:41.941 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:41 compute-0 nova_compute[189268]: 2025-11-22 08:38:41.986 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:45 compute-0 nova_compute[189268]: 2025-11-22 08:38:45.695 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:38:46 compute-0 nova_compute[189268]: 2025-11-22 08:38:46.649 189273 DEBUG oslo_concurrency.lockutils [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "e8c865a7-b309-4ee1-9843-bb58fc1c64b9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:38:46 compute-0 nova_compute[189268]: 2025-11-22 08:38:46.649 189273 DEBUG oslo_concurrency.lockutils [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "e8c865a7-b309-4ee1-9843-bb58fc1c64b9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:38:46 compute-0 nova_compute[189268]: 2025-11-22 08:38:46.650 189273 DEBUG oslo_concurrency.lockutils [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "e8c865a7-b309-4ee1-9843-bb58fc1c64b9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:38:46 compute-0 nova_compute[189268]: 2025-11-22 08:38:46.650 189273 DEBUG oslo_concurrency.lockutils [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "e8c865a7-b309-4ee1-9843-bb58fc1c64b9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:38:46 compute-0 nova_compute[189268]: 2025-11-22 08:38:46.650 189273 DEBUG oslo_concurrency.lockutils [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "e8c865a7-b309-4ee1-9843-bb58fc1c64b9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:38:46 compute-0 nova_compute[189268]: 2025-11-22 08:38:46.652 189273 INFO nova.compute.manager [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Terminating instance
Nov 22 08:38:46 compute-0 nova_compute[189268]: 2025-11-22 08:38:46.653 189273 DEBUG oslo_concurrency.lockutils [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "refresh_cache-e8c865a7-b309-4ee1-9843-bb58fc1c64b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:38:46 compute-0 nova_compute[189268]: 2025-11-22 08:38:46.653 189273 DEBUG oslo_concurrency.lockutils [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquired lock "refresh_cache-e8c865a7-b309-4ee1-9843-bb58fc1c64b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:38:46 compute-0 nova_compute[189268]: 2025-11-22 08:38:46.653 189273 DEBUG nova.network.neutron [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 08:38:46 compute-0 nova_compute[189268]: 2025-11-22 08:38:46.812 189273 DEBUG nova.network.neutron [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 08:38:46 compute-0 nova_compute[189268]: 2025-11-22 08:38:46.943 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:46 compute-0 nova_compute[189268]: 2025-11-22 08:38:46.988 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:47 compute-0 nova_compute[189268]: 2025-11-22 08:38:47.177 189273 DEBUG nova.network.neutron [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:38:47 compute-0 nova_compute[189268]: 2025-11-22 08:38:47.198 189273 DEBUG oslo_concurrency.lockutils [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Releasing lock "refresh_cache-e8c865a7-b309-4ee1-9843-bb58fc1c64b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:38:47 compute-0 nova_compute[189268]: 2025-11-22 08:38:47.198 189273 DEBUG nova.compute.manager [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 08:38:47 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 22 08:38:47 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 14.134s CPU time.
Nov 22 08:38:47 compute-0 systemd-machined[155703]: Machine qemu-6-instance-00000006 terminated.
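systemd escapes non-alphanumeric bytes in unit names as \xNN, so machine-qemu\x2d6\x2dinstance\x2d00000006.scope is the scope for machine qemu-6-instance-00000006, exactly as systemd-machined reports it one line later. A small decoder covering just that escaping:

    # Sketch: undo systemd's \xNN escaping in unit names.
    import re

    def unescape_unit(name: str) -> str:
        return re.sub(r'\\x([0-9a-fA-F]{2})',
                      lambda m: chr(int(m.group(1), 16)), name)

    print(unescape_unit(r'machine-qemu\x2d6\x2dinstance\x2d00000006.scope'))
    # machine-qemu-6-instance-00000006.scope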
Nov 22 08:38:47 compute-0 podman[246095]: 2025-11-22 08:38:47.340331704 +0000 UTC m=+0.070523321 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 08:38:47 compute-0 podman[246094]: 2025-11-22 08:38:47.340625682 +0000 UTC m=+0.075279870 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:38:47 compute-0 podman[246093]: 2025-11-22 08:38:47.376324899 +0000 UTC m=+0.110035901 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 22 08:38:47 compute-0 nova_compute[189268]: 2025-11-22 08:38:47.482 189273 INFO nova.virt.libvirt.driver [-] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Instance destroyed successfully.
Nov 22 08:38:47 compute-0 nova_compute[189268]: 2025-11-22 08:38:47.483 189273 DEBUG nova.objects.instance [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lazy-loading 'resources' on Instance uuid e8c865a7-b309-4ee1-9843-bb58fc1c64b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:38:47 compute-0 nova_compute[189268]: 2025-11-22 08:38:47.494 189273 INFO nova.virt.libvirt.driver [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Deleting instance files /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9_del
Nov 22 08:38:47 compute-0 nova_compute[189268]: 2025-11-22 08:38:47.495 189273 INFO nova.virt.libvirt.driver [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Deletion of /var/lib/nova/instances/e8c865a7-b309-4ee1-9843-bb58fc1c64b9_del complete
Nov 22 08:38:47 compute-0 nova_compute[189268]: 2025-11-22 08:38:47.594 189273 INFO nova.compute.manager [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Took 0.40 seconds to destroy the instance on the hypervisor.
Nov 22 08:38:47 compute-0 nova_compute[189268]: 2025-11-22 08:38:47.595 189273 DEBUG oslo.service.loopingcall [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 08:38:47 compute-0 nova_compute[189268]: 2025-11-22 08:38:47.595 189273 DEBUG nova.compute.manager [-] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 08:38:47 compute-0 nova_compute[189268]: 2025-11-22 08:38:47.596 189273 DEBUG nova.network.neutron [-] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 08:38:48 compute-0 nova_compute[189268]: 2025-11-22 08:38:48.025 189273 DEBUG nova.network.neutron [-] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 08:38:48 compute-0 nova_compute[189268]: 2025-11-22 08:38:48.038 189273 DEBUG nova.network.neutron [-] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:38:48 compute-0 nova_compute[189268]: 2025-11-22 08:38:48.050 189273 INFO nova.compute.manager [-] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Took 0.45 seconds to deallocate network for instance.
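At the API level, the deallocation that just completed amounts to removing the neutron ports bound to the instance's device_id. An illustration using openstacksdk (not nova's internal code path; the cloud name is an assumption for a configured clouds.yaml):

    # Illustration only: what "deallocate network" boils down to at the API.
    # Assumes openstacksdk and a clouds.yaml entry named "overcloud".
    import openstack

    conn = openstack.connect(cloud='overcloud')
    instance_uuid = 'e8c865a7-b309-4ee1-9843-bb58fc1c64b9'
    for port in conn.network.ports(device_id=instance_uuid):
        conn.network.delete_port(port, ignore_missing=True)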
Nov 22 08:38:48 compute-0 nova_compute[189268]: 2025-11-22 08:38:48.121 189273 DEBUG oslo_concurrency.lockutils [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:38:48 compute-0 nova_compute[189268]: 2025-11-22 08:38:48.122 189273 DEBUG oslo_concurrency.lockutils [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:38:48 compute-0 nova_compute[189268]: 2025-11-22 08:38:48.224 189273 DEBUG nova.compute.provider_tree [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:38:48 compute-0 nova_compute[189268]: 2025-11-22 08:38:48.244 189273 DEBUG nova.scheduler.client.report [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:38:48 compute-0 nova_compute[189268]: 2025-11-22 08:38:48.280 189273 DEBUG oslo_concurrency.lockutils [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:38:48 compute-0 nova_compute[189268]: 2025-11-22 08:38:48.521 189273 INFO nova.scheduler.client.report [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Deleted allocations for instance e8c865a7-b309-4ee1-9843-bb58fc1c64b9
Nov 22 08:38:48 compute-0 nova_compute[189268]: 2025-11-22 08:38:48.601 189273 DEBUG oslo_concurrency.lockutils [None req-249f8742-c0f7-42ba-a31a-df8127a15490 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "e8c865a7-b309-4ee1-9843-bb58fc1c64b9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:38:51 compute-0 nova_compute[189268]: 2025-11-22 08:38:51.947 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:51 compute-0 nova_compute[189268]: 2025-11-22 08:38:51.991 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:54 compute-0 sshd-session[246054]: Invalid user NL5xUDpV2xRa from 195.88.120.62 port 37032
Nov 22 08:38:54 compute-0 sshd-session[246054]: fatal: userauth_pubkey: parse publickey packet: incomplete message [preauth]
Nov 22 08:38:56 compute-0 nova_compute[189268]: 2025-11-22 08:38:56.950 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:56 compute-0 nova_compute[189268]: 2025-11-22 08:38:56.993 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:38:57 compute-0 podman[246168]: 2025-11-22 08:38:57.13816916 +0000 UTC m=+0.087507621 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:38:57 compute-0 podman[246167]: 2025-11-22 08:38:57.164855563 +0000 UTC m=+0.119462257 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Nov 22 08:38:59 compute-0 podman[203476]: time="2025-11-22T08:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:38:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:38:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
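The two GET lines above are podman's REST API being scraped over its unix socket (podman_exporter is the obvious client, given its CONTAINER_HOST setting earlier in the log). A standard-library sketch of the same query; the socket path is the common default and an assumption here:

    # Sketch: query podman's libpod REST API over its unix socket.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__('localhost')
            self.socket_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print([c['Names'] for c in containers])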
Nov 22 08:39:01 compute-0 openstack_network_exporter[205661]: ERROR   08:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:39:01 compute-0 openstack_network_exporter[205661]: ERROR   08:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:39:01 compute-0 openstack_network_exporter[205661]: ERROR   08:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:39:01 compute-0 openstack_network_exporter[205661]: ERROR   08:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:39:01 compute-0 openstack_network_exporter[205661]: ERROR   08:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
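These exporter errors recur every 30 seconds (the same block appears again at 08:39:31): openstack_network_exporter probes appctl control sockets for ovn-northd and ovsdb-server, but a compute node typically runs only ovn-controller and ovs-vswitchd, so those sockets never exist here. A quick check for the sockets it is looking for; the paths are the usual defaults and are assumptions:

    # Sketch: look for the appctl control sockets the exporter is probing.
    from glob import glob

    for pattern in ('/var/run/ovn/ovn-northd.*.ctl',
                    '/var/run/openvswitch/ovsdb-server.*.ctl',
                    '/var/run/openvswitch/ovs-vswitchd.*.ctl'):
        print(pattern, '->', glob(pattern) or 'no control socket')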
Nov 22 08:39:01 compute-0 nova_compute[189268]: 2025-11-22 08:39:01.953 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:01 compute-0 nova_compute[189268]: 2025-11-22 08:39:01.995 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:02 compute-0 podman[246204]: 2025-11-22 08:39:02.131680965 +0000 UTC m=+0.083209484 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, release=1214.1726694543, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vcs-type=git, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, name=ubi9)
Nov 22 08:39:02 compute-0 podman[246205]: 2025-11-22 08:39:02.165153402 +0000 UTC m=+0.109138977 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:39:02 compute-0 nova_compute[189268]: 2025-11-22 08:39:02.479 189273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763800727.4785793, e8c865a7-b309-4ee1-9843-bb58fc1c64b9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:39:02 compute-0 nova_compute[189268]: 2025-11-22 08:39:02.480 189273 INFO nova.compute.manager [-] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] VM Stopped (Lifecycle Event)
Nov 22 08:39:03 compute-0 nova_compute[189268]: 2025-11-22 08:39:03.220 189273 DEBUG nova.compute.manager [None req-217d02d9-64bf-4109-9213-e4f7051d6428 - - - - - -] [instance: e8c865a7-b309-4ee1-9843-bb58fc1c64b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
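The Stopped lifecycle event above reaches nova through libvirt's domain event callbacks, which is why the driver re-checks the power state immediately afterwards. A bare-bones listener for the same events, assuming libvirt-python and a reachable qemu:///system socket:

    # Sketch: a minimal libvirt lifecycle listener; nova's driver consumes
    # the same events to emit "VM Stopped (Lifecycle Event)".
    import libvirt

    def on_lifecycle(conn, dom, event, detail, opaque):
        print(dom.UUIDString(), 'lifecycle event:', event, 'detail:', detail)

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.openReadOnly('qemu:///system')
    conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                on_lifecycle, None)
    while True:               # event loop; Ctrl-C to stop
        libvirt.virEventRunDefaultImpl()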
Nov 22 08:39:06 compute-0 nova_compute[189268]: 2025-11-22 08:39:06.957 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:06 compute-0 nova_compute[189268]: 2025-11-22 08:39:06.998 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:08 compute-0 podman[246248]: 2025-11-22 08:39:08.185814905 +0000 UTC m=+0.130529656 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, distribution-scope=public, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 22 08:39:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:39:09.979 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:39:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:39:09.980 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:39:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:39:09.982 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:39:10 compute-0 podman[246270]: 2025-11-22 08:39:10.168257631 +0000 UTC m=+0.116219758 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 22 08:39:11 compute-0 nova_compute[189268]: 2025-11-22 08:39:11.962 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:12 compute-0 nova_compute[189268]: 2025-11-22 08:39:12.000 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:14 compute-0 sshd-session[245560]: Received disconnect from 38.129.56.128 port 35030:11: disconnected by user
Nov 22 08:39:14 compute-0 sshd-session[245560]: Disconnected from user zuul 38.129.56.128 port 35030
Nov 22 08:39:14 compute-0 sshd-session[245557]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:39:14 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Nov 22 08:39:14 compute-0 systemd[1]: session-30.scope: Consumed 1.033s CPU time.
Nov 22 08:39:14 compute-0 systemd-logind[826]: Session 30 logged out. Waiting for processes to exit.
Nov 22 08:39:14 compute-0 systemd-logind[826]: Removed session 30.
Nov 22 08:39:16 compute-0 nova_compute[189268]: 2025-11-22 08:39:16.966 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:17 compute-0 nova_compute[189268]: 2025-11-22 08:39:17.002 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:18 compute-0 podman[246295]: 2025-11-22 08:39:18.141213349 +0000 UTC m=+0.093207465 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 22 08:39:18 compute-0 podman[246296]: 2025-11-22 08:39:18.160220764 +0000 UTC m=+0.105791526 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 08:39:18 compute-0 podman[246294]: 2025-11-22 08:39:18.160954743 +0000 UTC m=+0.110361360 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true)
Nov 22 08:39:21 compute-0 nova_compute[189268]: 2025-11-22 08:39:21.970 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:22 compute-0 nova_compute[189268]: 2025-11-22 08:39:22.006 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:24 compute-0 sshd-session[246355]: Accepted publickey for zuul from 38.129.56.128 port 48070 ssh2: RSA SHA256:g1zSa//+/mxUXmf2M16Bba4a7+RLV+1PmLKCUOr+UqA
Nov 22 08:39:24 compute-0 systemd-logind[826]: New session 31 of user zuul.
Nov 22 08:39:24 compute-0 systemd[1]: Started Session 31 of User zuul.
Nov 22 08:39:24 compute-0 sshd-session[246355]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 08:39:25 compute-0 sudo[246532]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unrkpntlstpqamboknbwwdesowchblli ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763800764.4861934-59556-159195126846046/AnsiballZ_command.py'
Nov 22 08:39:25 compute-0 sudo[246532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:39:25 compute-0 python3[246534]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:39:25 compute-0 sudo[246532]: pam_unix(sudo:session): session closed for user root
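The ansible-command task above is just a liveness check on the node_exporter container via a shell pipeline. The same check without the shell, as a sketch:

    # Sketch: the check the ansible task performs, minus the pipeline.
    import subprocess

    out = subprocess.run(
        ['podman', 'ps', '-a', '--format', '{{.Names}} {{.Status}}'],
        capture_output=True, text=True, check=True,
    ).stdout
    matches = [line for line in out.splitlines() if 'node_exporter' in line]
    print(matches)   # e.g. ['node_exporter Up 2 hours (healthy)']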
Nov 22 08:39:26 compute-0 nova_compute[189268]: 2025-11-22 08:39:26.972 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:27 compute-0 nova_compute[189268]: 2025-11-22 08:39:27.008 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:28 compute-0 podman[246575]: 2025-11-22 08:39:28.125788068 +0000 UTC m=+0.076262557 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:39:28 compute-0 podman[246574]: 2025-11-22 08:39:28.126271992 +0000 UTC m=+0.080738549 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251118)
Nov 22 08:39:29 compute-0 podman[203476]: time="2025-11-22T08:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:39:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:39:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Nov 22 08:39:30 compute-0 nova_compute[189268]: 2025-11-22 08:39:30.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:39:30 compute-0 nova_compute[189268]: 2025-11-22 08:39:30.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:39:31 compute-0 nova_compute[189268]: 2025-11-22 08:39:31.043 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-64e4ab2b-2a08-4c3c-9561-94454cb0b482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:39:31 compute-0 nova_compute[189268]: 2025-11-22 08:39:31.043 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-64e4ab2b-2a08-4c3c-9561-94454cb0b482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:39:31 compute-0 nova_compute[189268]: 2025-11-22 08:39:31.044 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:39:31 compute-0 openstack_network_exporter[205661]: ERROR   08:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:39:31 compute-0 openstack_network_exporter[205661]: ERROR   08:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:39:31 compute-0 openstack_network_exporter[205661]: ERROR   08:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:39:31 compute-0 openstack_network_exporter[205661]: ERROR   08:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:39:31 compute-0 openstack_network_exporter[205661]: ERROR   08:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:39:31 compute-0 nova_compute[189268]: 2025-11-22 08:39:31.977 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:32 compute-0 nova_compute[189268]: 2025-11-22 08:39:32.011 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:32 compute-0 nova_compute[189268]: 2025-11-22 08:39:32.499 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Updating instance_info_cache with network_info: [{"id": "433ff318-0c74-4ba4-ac48-8114bc74a566", "address": "fa:16:3e:4d:1a:4a", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433ff318-0c", "ovs_interfaceid": "433ff318-0c74-4ba4-ac48-8114bc74a566", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:39:32 compute-0 nova_compute[189268]: 2025-11-22 08:39:32.515 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-64e4ab2b-2a08-4c3c-9561-94454cb0b482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:39:32 compute-0 nova_compute[189268]: 2025-11-22 08:39:32.515 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 08:39:32 compute-0 nova_compute[189268]: 2025-11-22 08:39:32.516 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:39:33 compute-0 nova_compute[189268]: 2025-11-22 08:39:33.097 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:39:33 compute-0 nova_compute[189268]: 2025-11-22 08:39:33.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:39:33 compute-0 podman[246611]: 2025-11-22 08:39:33.167755152 +0000 UTC m=+0.106424910 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 08:39:33 compute-0 podman[246610]: 2025-11-22 08:39:33.181296678 +0000 UTC m=+0.117545821 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.expose-services=, container_name=kepler, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, release-0.7.12=)
Nov 22 08:39:34 compute-0 sudo[246825]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcecjglnuukipmvpbjtqdctposplwzge ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763800773.9557657-59720-173113883605339/AnsiballZ_command.py'
Nov 22 08:39:34 compute-0 sudo[246825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:39:34 compute-0 python3[246827]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:39:34 compute-0 sudo[246825]: pam_unix(sudo:session): session closed for user root
Nov 22 08:39:35 compute-0 nova_compute[189268]: 2025-11-22 08:39:35.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:39:36 compute-0 nova_compute[189268]: 2025-11-22 08:39:36.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:39:36 compute-0 nova_compute[189268]: 2025-11-22 08:39:36.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:39:36 compute-0 nova_compute[189268]: 2025-11-22 08:39:36.981 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:37 compute-0 nova_compute[189268]: 2025-11-22 08:39:37.014 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:39 compute-0 nova_compute[189268]: 2025-11-22 08:39:39.096 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:39:39 compute-0 nova_compute[189268]: 2025-11-22 08:39:39.128 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:39:39 compute-0 podman[246865]: 2025-11-22 08:39:39.136765793 +0000 UTC m=+0.085901468 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 22 08:39:41 compute-0 podman[246885]: 2025-11-22 08:39:41.122529816 +0000 UTC m=+0.070134552 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:39:41 compute-0 nova_compute[189268]: 2025-11-22 08:39:41.983 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.016 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.143 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.143 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.144 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.144 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.232 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.332 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.333 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.396 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.398 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.462 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.464 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.529 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.539 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.610 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.612 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.695 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.697 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.765 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.767 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:39:42 compute-0 nova_compute[189268]: 2025-11-22 08:39:42.839 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:39:43 compute-0 nova_compute[189268]: 2025-11-22 08:39:43.204 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:39:43 compute-0 nova_compute[189268]: 2025-11-22 08:39:43.205 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4943MB free_disk=72.4552001953125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:39:43 compute-0 nova_compute[189268]: 2025-11-22 08:39:43.205 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:39:43 compute-0 nova_compute[189268]: 2025-11-22 08:39:43.206 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:39:43 compute-0 nova_compute[189268]: 2025-11-22 08:39:43.282 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:39:43 compute-0 nova_compute[189268]: 2025-11-22 08:39:43.282 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 64e4ab2b-2a08-4c3c-9561-94454cb0b482 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:39:43 compute-0 nova_compute[189268]: 2025-11-22 08:39:43.282 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:39:43 compute-0 nova_compute[189268]: 2025-11-22 08:39:43.282 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:39:43 compute-0 nova_compute[189268]: 2025-11-22 08:39:43.343 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:39:43 compute-0 nova_compute[189268]: 2025-11-22 08:39:43.359 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:39:43 compute-0 nova_compute[189268]: 2025-11-22 08:39:43.379 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:39:43 compute-0 nova_compute[189268]: 2025-11-22 08:39:43.380 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:39:44 compute-0 sudo[247106]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doeunuhqufwzimjlybeqfbwzaumpilvi ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763800784.0004783-59875-106688944867970/AnsiballZ_command.py'
Nov 22 08:39:44 compute-0 sudo[247106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:39:44 compute-0 python3[247108]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:39:44 compute-0 sudo[247106]: pam_unix(sudo:session): session closed for user root
Nov 22 08:39:46 compute-0 nova_compute[189268]: 2025-11-22 08:39:46.987 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:47 compute-0 nova_compute[189268]: 2025-11-22 08:39:47.019 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:47 compute-0 nova_compute[189268]: 2025-11-22 08:39:47.380 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:39:49 compute-0 podman[247149]: 2025-11-22 08:39:49.134953648 +0000 UTC m=+0.071757686 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:39:49 compute-0 podman[247148]: 2025-11-22 08:39:49.139602764 +0000 UTC m=+0.081002955 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 08:39:49 compute-0 podman[247150]: 2025-11-22 08:39:49.147317922 +0000 UTC m=+0.078728974 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 08:39:51 compute-0 nova_compute[189268]: 2025-11-22 08:39:51.991 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:52 compute-0 nova_compute[189268]: 2025-11-22 08:39:52.021 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:56 compute-0 nova_compute[189268]: 2025-11-22 08:39:56.995 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:57 compute-0 nova_compute[189268]: 2025-11-22 08:39:57.023 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:39:59 compute-0 podman[247210]: 2025-11-22 08:39:59.15839835 +0000 UTC m=+0.104824258 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 08:39:59 compute-0 podman[247209]: 2025-11-22 08:39:59.165811419 +0000 UTC m=+0.112723940 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 22 08:39:59 compute-0 sudo[247419]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpjagifofgwxokykcbcjgvpevchkrcme ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763800799.163564-60091-9370987392202/AnsiballZ_command.py'
Nov 22 08:39:59 compute-0 podman[203476]: time="2025-11-22T08:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:39:59 compute-0 sudo[247419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 08:39:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:39:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4802 "" "Go-http-client/1.1"
Nov 22 08:39:59 compute-0 python3[247421]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 08:39:59 compute-0 sudo[247419]: pam_unix(sudo:session): session closed for user root
Nov 22 08:40:01 compute-0 openstack_network_exporter[205661]: ERROR   08:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:40:01 compute-0 openstack_network_exporter[205661]: ERROR   08:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:40:01 compute-0 openstack_network_exporter[205661]: ERROR   08:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:40:01 compute-0 openstack_network_exporter[205661]: ERROR   08:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:40:01 compute-0 openstack_network_exporter[205661]: ERROR   08:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:40:01 compute-0 nova_compute[189268]: 2025-11-22 08:40:01.997 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:02 compute-0 nova_compute[189268]: 2025-11-22 08:40:02.025 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:04 compute-0 podman[247460]: 2025-11-22 08:40:04.119193105 +0000 UTC m=+0.079004151 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, version=9.4, io.openshift.expose-services=, io.openshift.tags=base rhel9, container_name=kepler, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64)
Nov 22 08:40:04 compute-0 podman[247461]: 2025-11-22 08:40:04.146689317 +0000 UTC m=+0.099601037 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 08:40:07 compute-0 nova_compute[189268]: 2025-11-22 08:40:07.000 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:07 compute-0 nova_compute[189268]: 2025-11-22 08:40:07.027 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:40:09.980 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:40:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:40:09.980 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:40:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:40:09.981 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:40:10 compute-0 podman[247501]: 2025-11-22 08:40:10.209571472 +0000 UTC m=+0.150336225 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 22 08:40:12 compute-0 nova_compute[189268]: 2025-11-22 08:40:12.003 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:12 compute-0 nova_compute[189268]: 2025-11-22 08:40:12.030 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
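The recurring nova_compute lines are the OVSDB IDL's event loop noticing readable data on its database socket. A sketch of that poll-driven wakeup, assuming an already-connected socket; the fd number and the read are illustrative:

    import select

    def wait_for_update(sock):
        # Register the OVSDB socket and block until it is readable; the
        # IDL logs "[POLLIN] on fd N" on each such wakeup.
        poller = select.poll()
        poller.register(sock.fileno(), select.POLLIN)
        for fd, events in poller.poll():
            if events & select.POLLIN:
                return sock.recv(4096)   # illustrative read of the update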
Nov 22 08:40:12 compute-0 podman[247522]: 2025-11-22 08:40:12.160167265 +0000 UTC m=+0.105222139 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
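node_exporter publishes on host port 9100 per the ports mapping above. A sketch of scraping it, assuming plain HTTP; the logged --web.config.file may enable TLS, in which case an https URL and CA verification would be needed:

    import urllib.request

    def scrape_node_exporter(host='localhost', port=9100):
        # /metrics is the standard Prometheus endpoint; the timeout keeps
        # a wedged exporter from blocking the caller.
        url = f'http://{host}:{port}/metrics'
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.read().decode()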
Nov 22 08:40:17 compute-0 nova_compute[189268]: 2025-11-22 08:40:17.004 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:17 compute-0 nova_compute[189268]: 2025-11-22 08:40:17.033 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:20 compute-0 podman[247546]: 2025-11-22 08:40:20.128718979 +0000 UTC m=+0.071091518 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 22 08:40:20 compute-0 podman[247545]: 2025-11-22 08:40:20.130247091 +0000 UTC m=+0.080941774 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 08:40:20 compute-0 podman[247552]: 2025-11-22 08:40:20.140116257 +0000 UTC m=+0.069891276 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
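Each health_status line above carries health_status=healthy and health_failing_streak=0. Those fields can be read back from podman's inspect output; a sketch, hedging on the JSON key name since podman has exposed the block as 'Health' or 'Healthcheck' depending on version:

    import json
    import subprocess

    def container_health(name):
        out = subprocess.run(['podman', 'inspect', name],
                             capture_output=True, text=True, check=True).stdout
        state = json.loads(out)[0].get('State', {})
        # Try both key spellings rather than assume one podman version.
        health = state.get('Health') or state.get('Healthcheck') or {}
        return health.get('Status'), health.get('FailingStreak')

    print(container_health('ovn_metadata_agent'))  # e.g. ('healthy', 0)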
Nov 22 08:40:22 compute-0 nova_compute[189268]: 2025-11-22 08:40:22.007 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:22 compute-0 nova_compute[189268]: 2025-11-22 08:40:22.036 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.093 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] exceeds the number of worker threads available to execute them; polling can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.093 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
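These two manager lines say that the pollster count exceeds the worker count and that source [pollsters] will be processed with a single thread. A sketch of that dispatch, with hypothetical pollster objects standing in for the registered extensions:

    from concurrent.futures import ThreadPoolExecutor

    def run_polling_task(pollsters, workers=1):
        # With workers=1, as logged above, the pollsters run sequentially
        # even though they are submitted to an executor.
        with ThreadPoolExecutor(max_workers=workers) as executor:
            futures = [executor.submit(p.poll) for p in pollsters]
            return [f.result() for f in futures]  # waits on every pollster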
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.093 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.094 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b78c2c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
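The run of "Registering pollster" lines above prints stevedore Extension objects. A hedged sketch of how such extensions are discovered: stevedore loads entry points from a namespace and wraps each plugin in an Extension. The namespace is assumed here, and real pollsters may need constructor arguments, so no invoke_on_load is shown:

    from stevedore import extension

    mgr = extension.ExtensionManager(
        namespace='ceilometer.poll.compute',  # assumed entry-point namespace
    )
    for ext in mgr:  # each ext is the Extension object the log prints
        print(f'Registering pollster [{ext!r}] from source [pollsters]')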
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.104 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '78b5db02-f49a-4c0b-b4f6-8d3b3d689e66', 'name': 'test_0', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.108 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '64e4ab2b-2a08-4c3c-9561-94454cb0b482', 'name': 'vn-qv6tptr-cfkm2etzuijf-gntxycdg4jfb-vnf-tuynx42zciyf', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {'metering.server_group': '209b9e59-811e-4c2b-a756-c29ba92c4b5c'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
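The two instance-data dicts just logged are what the [local_instances] discovery method hands to each pollster: one record per running guest on this host. A sketch of the kind of filter such a discovery implies; the predicate is illustrative:

    def local_instances(instances, host='compute-0.ctlplane.example.com'):
        # Keep only running guests scheduled on this compute node.
        return [
            inst for inst in instances
            if inst['OS-EXT-SRV-ATTR:host'] == host
            and inst['OS-EXT-STS:vm_state'] == 'running'
        ]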
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.109 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.109 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.109 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.109 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-22T08:40:22.109653) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
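Note the pid change between the two heartbeat lines: the polling worker (15) announces the pollster name and a supervisor process (12) stamps and records it. A sketch of that handshake; the queue transport is an assumption for illustration only:

    import datetime
    import multiprocessing

    def worker(queue, pollster_name):
        # pid-15 side: "Pollster heartbeat update: <name>"
        queue.put(pollster_name)

    def supervisor(queue, status):
        # pid-12 side: "Updated heartbeat for <name> (<timestamp>)"
        name = queue.get()
        status[name] = datetime.datetime.utcnow().isoformat()

    if __name__ == '__main__':
        q, status = multiprocessing.Queue(), {}
        worker(q, 'network.incoming.bytes')
        supervisor(q, status)
        print(status)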
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.118 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.bytes volume: 2472 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.123 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.124 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
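The _stats_to_sample lines turn a cumulative counter read from the hypervisor into one sample per instance, e.g. the 2472- and 1654-byte volumes logged for network.incoming.bytes. A sketch with simplified field names standing in for ceilometer's Sample class:

    def stats_to_sample(instance_id, meter, volume, unit='B'):
        # One cumulative sample per (instance, meter) pair.
        return {'resource_id': instance_id, 'name': meter,
                'type': 'cumulative', 'unit': unit, 'volume': volume}

    sample = stats_to_sample('78b5db02-f49a-4c0b-b4f6-8d3b3d689e66',
                             'network.incoming.bytes', 2472)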
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.124 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.124 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.124 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.124 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.124 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.124 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.125 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.126 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.126 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.126 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.126 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.126 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.126 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.127 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.127 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.127 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.128 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.128 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.128 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.128 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.128 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.128 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.129 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.129 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.129 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.129 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.130 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.130 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.130 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-22T08:40:22.124753) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-22T08:40:22.126974) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-22T08:40:22.128661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-22T08:40:22.130344) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.163 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/cpu volume: 46790000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.192 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/cpu volume: 40530000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.193 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
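The cpu meter is cumulative guest CPU time in nanoseconds, so the 46790000000 sample above is about 46.79 s. A sketch of the usual rate derivation between two polls; vcpus=1 matches the m1.small flavor in the discovery output:

    def cpu_util_percent(prev_ns, cur_ns, interval_s, vcpus=1):
        # Percentage of one poll interval the guest spent on CPU.
        return 100.0 * (cur_ns - prev_ns) / (interval_s * 1e9 * vcpus)

    print(46790000000 / 1e9)   # 46.79 seconds of cumulative CPU time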
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.194 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.194 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.195 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.196 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.197 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.198 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-22T08:40:22.197199) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.200 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.201 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.202 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.203 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.203 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.204 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.205 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.206 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-22T08:40:22.204715) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.232 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.233 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.233 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.256 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.256 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.256 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.257 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
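Each instance reports three disk.device.capacity samples. Read against the flavor in the discovery output, the two 1073741824-byte devices match the 1 GB root and 1 GB ephemeral disks, and the small third device (485376 or 583680 bytes) is consistent with a config-drive-sized volume; this mapping is an inference from the numbers, not logged fact. A quick check of the arithmetic:

    GiB = 1024 ** 3
    print(1073741824 == GiB)              # True: 1 GiB root/ephemeral disks
    print(485376 / 1024, 583680 / 1024)   # 474.0 and 570.0 KiB extras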
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.257 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.257 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.257 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.257 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.257 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.258 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-22T08:40:22.257804) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.332 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.332 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.333 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.398 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.398 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.399 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.399 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.399 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.399 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.399 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.399 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.400 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.400 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.400 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.400 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.400 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.400 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.401 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.401 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.401 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.401 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.401 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.401 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.402 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.402 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.402 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.402 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-22T08:40:22.399985) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.402 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-22T08:40:22.401966) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.402 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.402 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.403 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.403 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.403 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.403 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.403 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.403 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.403 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 1339396359 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.404 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-22T08:40:22.403678) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.404 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 138141875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.404 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 117550863 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.404 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.latency volume: 1133591681 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.405 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.latency volume: 382437315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.405 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.latency volume: 288491761 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.405 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
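Note on the "Checking if we need coordination" / "hashrings are the following [None]" pair that repeats for every pollster above: when a polling source is configured for coordination, agents partition the resource set over a consistent hash ring so each instance is polled by exactly one agent; no source here requires it, so every pollster runs locally. A minimal sketch of the ownership test, using plain md5 hashing rather than ceilometer's tooz-backed ring (all names are illustrative):

import hashlib

def ring_owner(resource_id: str, agents: list[str]) -> str:
    # Hash the agents and the resource onto the same 128-bit ring, then
    # pick the first agent clockwise from the resource's position.
    h = lambda s: int(hashlib.md5(s.encode()).hexdigest(), 16)
    return min(agents, key=lambda a: (h(a) - h(resource_id)) % (1 << 128))

# A coordinated agent would poll an instance only if it owns it:
# ring_owner(instance_uuid, group_members) == my_member_id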
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.405 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.405 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.406 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.406 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.406 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.406 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.406 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.407 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-22T08:40:22.406232) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
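Both instances report network.incoming.bytes.delta of 0: the *.delta meters are derived by subtracting the previous cycle's cumulative counter from the current reading, so an idle interval yields zero. A sketch of that derivation, with an illustrative cache rather than ceilometer's internal one:

prev = {}  # (instance_uuid, meter) -> previous cumulative reading

def cumulative_to_delta(instance_uuid: str, meter: str, value: int) -> int:
    key = (instance_uuid, meter)
    delta = value - prev.get(key, value)  # first observation yields 0
    prev[key] = value
    return max(delta, 0)  # clamp counter resets, e.g. after an instance reboot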
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.407 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.407 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.407 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.407 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.407 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.407 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.408 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.408 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.408 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.409 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.409 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.409 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.409 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.409 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.409 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-22T08:40:22.407766) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.409 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.409 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.410 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.410 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.410 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.410 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.410 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.411 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.411 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.411 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.411 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.411 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.411 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.412 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.412 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.412 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.412 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.412 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.412 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.413 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.413 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.413 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.413 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.414 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.414 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.414 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.414 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.414 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-22T08:40:22.410055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.414 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-22T08:40:22.412244) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.414 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.415 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.415 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.415 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-22T08:40:22.414336) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.416 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.416 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.416 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.416 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.416 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.416 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.417 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.417 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.417 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.417 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
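power.state volume 1 for both instances indicates a running domain, assuming the meter mirrors libvirt's virDomainState enum (value 1 is VIR_DOMAIN_RUNNING):

# libvirt virDomainState values; volume 1 above means "running"
POWER_STATE = {
    0: "nostate", 1: "running", 2: "blocked", 3: "paused",
    4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
}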
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.417 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.418 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-22T08:40:22.417007) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.418 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.418 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.418 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.418 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.418 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 18733649639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.418 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-22T08:40:22.418368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.418 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 19241219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.419 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.419 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.latency volume: 57392898403 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.419 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.latency volume: 229562299 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.419 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.420 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
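The disk.device.write.latency volumes (for example 18733649639 for the first device of 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66) are cumulative nanoseconds spent on writes, emitted once per virtual disk, which is why each instance produces three samples. Turning them into a per-interval average needs two cycles of both this meter and disk.device.write.requests; a hypothetical helper:

def avg_write_latency_ns(lat_prev: int, lat_now: int,
                         req_prev: int, req_now: int) -> float:
    # average ns spent per write request during one polling interval
    reqs = req_now - req_prev
    return (lat_now - lat_prev) / reqs if reqs else 0.0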
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.420 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.420 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.420 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.420 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.420 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.420 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-22T08:40:22.420408) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.420 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.421 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.421 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.421 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.421 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.421 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.421 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.421 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.421 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.422 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.422 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.422 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.422 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.422 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.422 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-22T08:40:22.421290) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.422 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.423 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.423 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.423 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.423 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.423 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.423 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.423 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.423 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.424 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.424 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.424 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.424 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.424 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-22T08:40:22.422502) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.424 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.424 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-22T08:40:22.423350) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.424 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.424 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.425 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.425 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.425 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.425 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.425 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-22T08:40:22.424430) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.425 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.425 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.425 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.425 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.425 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/memory.usage volume: 48.90625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.426 15 DEBUG ceilometer.compute.pollsters [-] 64e4ab2b-2a08-4c3c-9561-94454cb0b482/memory.usage volume: 49.00390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.426 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
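The memory.usage volumes (48.90625 and 49.00390625) are megabytes resolved from libvirt's per-domain memory statistics, which report KiB. A sketch with the libvirt-python bindings; treating "available minus unused" as the usage is an assumption, since the exact derivation and which balloon-driver keys are present vary by version:

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByUUIDString("78b5db02-f49a-4c0b-b4f6-8d3b3d689e66")
stats = dom.memoryStats()  # KiB; keys depend on the guest balloon driver
usage_mb = (stats["available"] - stats["unused"]) / 1024.0  # assumed formula
print(round(usage_mb, 5))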
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.426 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.427 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.427 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.427 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.427 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.427 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-22T08:40:22.425747) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.427 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.427 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.427 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.427 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.428 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.428 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.428 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.428 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.428 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.428 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.428 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.428 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.428 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.428 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.428 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.428 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.428 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.428 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.429 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.429 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:40:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:40:22.429 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
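That burst of "Finished processing pollster [...]" lines closes one polling task: every meter in the task went through discovery, an optional coordination check, a heartbeat update, and sample generation. The control flow condenses to roughly this shape (an illustrative reduction, not the actual AgentManager code):

def run_polling_task(pollsters, discover, heartbeat, publish):
    for p in pollsters:
        resources = discover(p)   # "Executing discovery process ..."
        if not resources:
            continue              # "Skip pollster ..., no new resources found"
        heartbeat(p.name)         # "Pollster heartbeat update: ..."
        for sample in p.get_samples(resources):
            publish(sample)       # the "<uuid>/<meter> volume: ..." lines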
Nov 22 08:40:27 compute-0 nova_compute[189268]: 2025-11-22 08:40:27.009 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:27 compute-0 nova_compute[189268]: 2025-11-22 08:40:27.037 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:29 compute-0 podman[203476]: time="2025-11-22T08:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:40:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:40:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
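The two GET lines are a client walking podman's libpod REST API over its unix socket to list containers and pull one-shot stats. The same query can be reproduced from Python; the socket path below is the usual system default and an assumption here:

import http.client, json, socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path):
        super().__init__("localhost")
        self._path = path
    def connect(self):
        # Swap the TCP socket for the podman service's unix socket.
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed socket path
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print(len(containers), "containers")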
Nov 22 08:40:30 compute-0 nova_compute[189268]: 2025-11-22 08:40:30.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:40:30 compute-0 nova_compute[189268]: 2025-11-22 08:40:30.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:40:30 compute-0 nova_compute[189268]: 2025-11-22 08:40:30.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
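_heal_instance_info_cache is one of nova-compute's oslo.service periodic tasks; each run rebuilds the candidate list and refreshes the network info cache of one instance. The scheduling side reduces to the decorator below (a sketch only: the 60-second spacing is illustrative, and real nova wires this through ComputeManager):

from oslo_service import periodic_task

class HealSketch(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)  # illustrative interval
    def _heal_instance_info_cache(self, context):
        # pick (at most) one instance and refresh its info cache
        pass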
Nov 22 08:40:30 compute-0 podman[247603]: 2025-11-22 08:40:30.111292874 +0000 UTC m=+0.067270525 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 08:40:30 compute-0 podman[247602]: 2025-11-22 08:40:30.114354527 +0000 UTC m=+0.073817061 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.build-date=20251118)
Nov 22 08:40:31 compute-0 nova_compute[189268]: 2025-11-22 08:40:31.092 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:40:31 compute-0 nova_compute[189268]: 2025-11-22 08:40:31.092 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:40:31 compute-0 nova_compute[189268]: 2025-11-22 08:40:31.092 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:40:31 compute-0 nova_compute[189268]: 2025-11-22 08:40:31.092 189273 DEBUG nova.objects.instance [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:40:31 compute-0 openstack_network_exporter[205661]: ERROR   08:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:40:31 compute-0 openstack_network_exporter[205661]: ERROR   08:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:40:31 compute-0 openstack_network_exporter[205661]: ERROR   08:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:40:31 compute-0 rsyslogd[236668]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 08:40:31 compute-0 openstack_network_exporter[205661]: ERROR   08:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:40:31 compute-0 openstack_network_exporter[205661]: ERROR   08:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
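The openstack_network_exporter errors above recur on every scrape: the exporter probes for ovsdb-server and ovn-northd appctl control sockets and for a userspace (netdev) datapath, none of which exist on this kernel-datapath compute node. A minimal sketch for checking which control sockets are actually present, assuming the host paths /run/openvswitch and /run/ovn that appear in the exporter's volume mounts later in the log:

    # sketch: list the OVS/OVN appctl control sockets on the host, to see
    # why the exporter's calls fail; paths assumed from the exporter's
    # volume mounts (/run/openvswitch, /run/ovn)
    import glob

    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        found = glob.glob(pattern)
        print(pattern, "->", found if found else "none")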
Nov 22 08:40:32 compute-0 nova_compute[189268]: 2025-11-22 08:40:32.012 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:32 compute-0 nova_compute[189268]: 2025-11-22 08:40:32.039 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:34 compute-0 nova_compute[189268]: 2025-11-22 08:40:34.370 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updating instance_info_cache with network_info: [{"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "address": "fa:16:3e:4f:4a:5d", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.53", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4645bc8c-a8", "ovs_interfaceid": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:40:34 compute-0 nova_compute[189268]: 2025-11-22 08:40:34.385 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:40:34 compute-0 nova_compute[189268]: 2025-11-22 08:40:34.386 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
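The refresh_cache-78b5db02... lines above show oslo.concurrency's named-lock protocol end to end: Acquiring and Acquired at 08:40:31, the network info cache refresh, then Releasing at 08:40:34. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; do_refresh() is an illustrative stand-in, not nova's actual code:

    # sketch: serialize work on one instance's network-info cache,
    # mirroring the "refresh_cache-<uuid>" lock seen in the journal
    from oslo_concurrency import lockutils

    instance_uuid = "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66"

    def do_refresh():
        # placeholder for the actual cache refresh work
        pass

    # lockutils.lock() is a context manager; the lock is released on exit,
    # matching the Acquiring/Acquired/Releasing trio in the log
    with lockutils.lock("refresh_cache-%s" % instance_uuid):
        do_refresh()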
Nov 22 08:40:34 compute-0 nova_compute[189268]: 2025-11-22 08:40:34.386 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:40:34 compute-0 podman[247639]: 2025-11-22 08:40:34.480038096 +0000 UTC m=+0.083221384 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, name=ubi9, distribution-scope=public, version=9.4, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 22 08:40:34 compute-0 podman[247640]: 2025-11-22 08:40:34.512557234 +0000 UTC m=+0.113752689 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:40:35 compute-0 nova_compute[189268]: 2025-11-22 08:40:35.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:40:35 compute-0 nova_compute[189268]: 2025-11-22 08:40:35.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:40:35 compute-0 nova_compute[189268]: 2025-11-22 08:40:35.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:40:35 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 22 08:40:36 compute-0 nova_compute[189268]: 2025-11-22 08:40:36.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:40:37 compute-0 nova_compute[189268]: 2025-11-22 08:40:37.015 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:37 compute-0 nova_compute[189268]: 2025-11-22 08:40:37.041 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:37 compute-0 nova_compute[189268]: 2025-11-22 08:40:37.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:40:37 compute-0 nova_compute[189268]: 2025-11-22 08:40:37.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:40:37 compute-0 nova_compute[189268]: 2025-11-22 08:40:37.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 08:40:37 compute-0 nova_compute[189268]: 2025-11-22 08:40:37.110 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 08:40:37 compute-0 nova_compute[189268]: 2025-11-22 08:40:37.110 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:40:37 compute-0 nova_compute[189268]: 2025-11-22 08:40:37.111 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 08:40:40 compute-0 nova_compute[189268]: 2025-11-22 08:40:40.119 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:40:41 compute-0 podman[247685]: 2025-11-22 08:40:41.115238157 +0000 UTC m=+0.066569376 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vcs-type=git, container_name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, release=1755695350, build-date=2025-08-20T13:12:41, version=9.6, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 22 08:40:42 compute-0 nova_compute[189268]: 2025-11-22 08:40:42.021 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:42 compute-0 nova_compute[189268]: 2025-11-22 08:40:42.044 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:43 compute-0 nova_compute[189268]: 2025-11-22 08:40:43.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:40:43 compute-0 podman[247706]: 2025-11-22 08:40:43.101922252 +0000 UTC m=+0.058862868 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.113 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.138 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.138 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.138 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.138 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.223 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.291 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.292 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.372 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.373 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.437 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.438 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.502 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.512 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.582 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.584 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.680 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.682 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.745 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.747 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:40:44 compute-0 nova_compute[189268]: 2025-11-22 08:40:44.805 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
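Each qemu-img info probe above is launched through oslo_concurrency.prlimit, which caps the child process at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30); --force-share lets it read an image that a running QEMU holds locked. A sketch of an equivalent call from Python, assuming oslo.concurrency is available; the disk path is illustrative:

    # sketch: run "qemu-img info" under the same resource caps nova applies
    # (1 GiB address space, 30 s CPU), per the prlimit flags in the log
    from oslo_concurrency import processutils

    disk = "/var/lib/nova/instances/<uuid>/disk"  # illustrative path

    limits = processutils.ProcessLimits(address_space=1073741824,  # --as
                                        cpu_time=30)               # --cpu

    out, err = processutils.execute(
        "qemu-img", "info", disk, "--force-share", "--output=json",
        prlimit=limits,
        env_variables={"LC_ALL": "C", "LANG": "C"},
    )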
Nov 22 08:40:45 compute-0 nova_compute[189268]: 2025-11-22 08:40:45.142 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:40:45 compute-0 nova_compute[189268]: 2025-11-22 08:40:45.143 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4968MB free_disk=72.4552001953125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:40:45 compute-0 nova_compute[189268]: 2025-11-22 08:40:45.143 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:40:45 compute-0 nova_compute[189268]: 2025-11-22 08:40:45.143 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:40:45 compute-0 nova_compute[189268]: 2025-11-22 08:40:45.303 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:40:45 compute-0 nova_compute[189268]: 2025-11-22 08:40:45.303 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 64e4ab2b-2a08-4c3c-9561-94454cb0b482 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:40:45 compute-0 nova_compute[189268]: 2025-11-22 08:40:45.303 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:40:45 compute-0 nova_compute[189268]: 2025-11-22 08:40:45.303 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:40:45 compute-0 nova_compute[189268]: 2025-11-22 08:40:45.362 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing inventories for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 08:40:45 compute-0 nova_compute[189268]: 2025-11-22 08:40:45.410 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating ProviderTree inventory for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 08:40:45 compute-0 nova_compute[189268]: 2025-11-22 08:40:45.410 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating inventory in ProviderTree for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 08:40:45 compute-0 nova_compute[189268]: 2025-11-22 08:40:45.424 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing aggregate associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 08:40:45 compute-0 nova_compute[189268]: 2025-11-22 08:40:45.442 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing trait associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, traits: COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 08:40:45 compute-0 nova_compute[189268]: 2025-11-22 08:40:45.495 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:40:45 compute-0 nova_compute[189268]: 2025-11-22 08:40:45.507 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:40:45 compute-0 nova_compute[189268]: 2025-11-22 08:40:45.509 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:40:45 compute-0 nova_compute[189268]: 2025-11-22 08:40:45.509 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.366s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
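The inventory reported to placement above determines the host's schedulable capacity via usable = (total - reserved) * allocation_ratio, which is how this host's 8 physical vCPUs become 32 schedulable VCPU units. A quick check with the logged numbers:

    # sketch: schedulable capacity implied by the inventory in the log,
    # using placement's formula usable = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2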
Nov 22 08:40:47 compute-0 nova_compute[189268]: 2025-11-22 08:40:47.023 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:47 compute-0 nova_compute[189268]: 2025-11-22 08:40:47.047 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:47 compute-0 nova_compute[189268]: 2025-11-22 08:40:47.495 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:40:51 compute-0 podman[247756]: 2025-11-22 08:40:51.124984869 +0000 UTC m=+0.064893141 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 22 08:40:51 compute-0 podman[247760]: 2025-11-22 08:40:51.136169711 +0000 UTC m=+0.071093988 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 08:40:51 compute-0 podman[247755]: 2025-11-22 08:40:51.154089064 +0000 UTC m=+0.101329423 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 08:40:52 compute-0 nova_compute[189268]: 2025-11-22 08:40:52.026 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:52 compute-0 nova_compute[189268]: 2025-11-22 08:40:52.048 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:57 compute-0 nova_compute[189268]: 2025-11-22 08:40:57.028 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:57 compute-0 nova_compute[189268]: 2025-11-22 08:40:57.049 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:40:59 compute-0 sshd-session[246358]: Received disconnect from 38.129.56.128 port 48070:11: disconnected by user
Nov 22 08:40:59 compute-0 sshd-session[246358]: Disconnected from user zuul 38.129.56.128 port 48070
Nov 22 08:40:59 compute-0 sshd-session[246355]: pam_unix(sshd:session): session closed for user zuul
Nov 22 08:40:59 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Nov 22 08:40:59 compute-0 systemd[1]: session-31.scope: Consumed 3.848s CPU time.
Nov 22 08:40:59 compute-0 systemd-logind[826]: Session 31 logged out. Waiting for processes to exit.
Nov 22 08:40:59 compute-0 systemd-logind[826]: Removed session 31.
Nov 22 08:40:59 compute-0 podman[203476]: time="2025-11-22T08:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:40:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:40:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
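The two GET lines above are podman's API service answering libpod REST requests over its unix socket; the podman_exporter config logged at 08:40:51 points CONTAINER_HOST at unix:///run/podman/podman.sock. A standard-library sketch of the same containers/json call; the socket path is taken from that setting:

    # sketch: issue the libpod REST call seen in the log over podman's
    # unix socket, using only the Python standard library
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self.path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")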
Nov 22 08:41:01 compute-0 podman[247812]: 2025-11-22 08:41:01.123767056 +0000 UTC m=+0.077239165 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a)
Nov 22 08:41:01 compute-0 podman[247813]: 2025-11-22 08:41:01.124702971 +0000 UTC m=+0.075386244 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Nov 22 08:41:01 compute-0 openstack_network_exporter[205661]: ERROR   08:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:41:01 compute-0 openstack_network_exporter[205661]: ERROR   08:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:41:01 compute-0 openstack_network_exporter[205661]: ERROR   08:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:41:01 compute-0 openstack_network_exporter[205661]: ERROR   08:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:41:01 compute-0 openstack_network_exporter[205661]: ERROR   08:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:41:02 compute-0 nova_compute[189268]: 2025-11-22 08:41:02.032 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:02 compute-0 nova_compute[189268]: 2025-11-22 08:41:02.052 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:05 compute-0 podman[247849]: 2025-11-22 08:41:05.128021788 +0000 UTC m=+0.080713817 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release-0.7.12=, config_id=edpm, release=1214.1726694543, container_name=kepler, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 22 08:41:05 compute-0 podman[247850]: 2025-11-22 08:41:05.197081291 +0000 UTC m=+0.145522135 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:41:07 compute-0 nova_compute[189268]: 2025-11-22 08:41:07.034 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:07 compute-0 nova_compute[189268]: 2025-11-22 08:41:07.053 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:41:09.981 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:41:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:41:09.982 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:41:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:41:09.983 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
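This Acquiring/acquired/released triplet is the standard oslo.concurrency trace for a synchronized section: the metadata agent's ProcessMonitor serializes _check_child_processes so only one pass over its spawned haproxy children runs at a time. A minimal sketch of the pattern that produces exactly these DEBUG lines (the function body here is a hypothetical stand-in, not the agent's own code):

    from oslo_concurrency import lockutils

    # lockutils.synchronized wraps the call in a named in-process lock and logs
    # the "Acquiring"/"acquired"/"released" lines seen above at DEBUG level.
    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        ...  # inspect child processes while holding the lock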
Nov 22 08:41:12 compute-0 nova_compute[189268]: 2025-11-22 08:41:12.036 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:12 compute-0 nova_compute[189268]: 2025-11-22 08:41:12.057 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:12 compute-0 podman[247893]: 2025-11-22 08:41:12.115581323 +0000 UTC m=+0.072769444 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, version=9.6, managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public)
Nov 22 08:41:14 compute-0 podman[247915]: 2025-11-22 08:41:14.101171937 +0000 UTC m=+0.059152316 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 22 08:41:17 compute-0 nova_compute[189268]: 2025-11-22 08:41:17.038 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:17 compute-0 nova_compute[189268]: 2025-11-22 08:41:17.058 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:22 compute-0 nova_compute[189268]: 2025-11-22 08:41:22.041 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:22 compute-0 nova_compute[189268]: 2025-11-22 08:41:22.061 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:22 compute-0 podman[247940]: 2025-11-22 08:41:22.111369136 +0000 UTC m=+0.057725497 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:41:22 compute-0 podman[247941]: 2025-11-22 08:41:22.123934505 +0000 UTC m=+0.067214474 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 08:41:22 compute-0 podman[247939]: 2025-11-22 08:41:22.151056657 +0000 UTC m=+0.099571537 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Nov 22 08:41:27 compute-0 nova_compute[189268]: 2025-11-22 08:41:27.043 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:27 compute-0 nova_compute[189268]: 2025-11-22 08:41:27.062 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:29 compute-0 podman[203476]: time="2025-11-22T08:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:41:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:41:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
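The two GET requests above show the libpod REST API answering a scrape over its unix socket; the podman_exporter container configured later in this log points CONTAINER_HOST at unix:///run/podman/podman.sock, which matches. A sketch of the same containers/json?all=true query through the podman-py bindings, assuming that socket path:

    from podman import PodmanClient

    # Same listing the service answers above
    # (GET /libpod/containers/json?all=true), issued through podman-py
    # over the unix socket.
    with PodmanClient(base_url="unix:///run/podman/podman.sock") as client:
        for ctr in client.containers.list(all=True):
            print(ctr.name, ctr.status)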
Nov 22 08:41:31 compute-0 openstack_network_exporter[205661]: ERROR   08:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:41:31 compute-0 openstack_network_exporter[205661]: ERROR   08:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:41:31 compute-0 openstack_network_exporter[205661]: ERROR   08:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:41:31 compute-0 openstack_network_exporter[205661]: ERROR   08:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:41:31 compute-0 openstack_network_exporter[205661]: ERROR   08:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
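These exporter errors are largely expected on a compute node. appctl-style calls reach a daemon through its control socket (<rundir>/<daemon>.<pid>.ctl); ovn-northd typically runs on the control plane, not on computes, so its socket is never present here, and the dpif-netdev/pmd-* counters exist only for the userspace (netdev) datapath, while this node's ports bind with datapath_type "system" (the kernel datapath, as the port binding logged at 08:41:35 below shows). A quick check for the sockets, with host-side paths assumed from the exporter's volume mounts:

    import glob

    # Control sockets appctl-style calls target; empty matches reproduce the
    # "no control socket files found" errors above.
    for pattern in ("/var/run/openvswitch/ovs-vswitchd.*.ctl",
                    "/var/run/openvswitch/ovsdb-server.*.ctl",
                    "/var/lib/openvswitch/ovn/ovn-northd.*.ctl"):
        print(pattern, glob.glob(pattern))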
Nov 22 08:41:32 compute-0 nova_compute[189268]: 2025-11-22 08:41:32.045 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:32 compute-0 nova_compute[189268]: 2025-11-22 08:41:32.064 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:32 compute-0 nova_compute[189268]: 2025-11-22 08:41:32.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:41:32 compute-0 nova_compute[189268]: 2025-11-22 08:41:32.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:41:32 compute-0 podman[247997]: 2025-11-22 08:41:32.111587961 +0000 UTC m=+0.067461710 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 08:41:32 compute-0 podman[247998]: 2025-11-22 08:41:32.14083147 +0000 UTC m=+0.092723571 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:41:33 compute-0 nova_compute[189268]: 2025-11-22 08:41:33.126 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-64e4ab2b-2a08-4c3c-9561-94454cb0b482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:41:33 compute-0 nova_compute[189268]: 2025-11-22 08:41:33.127 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-64e4ab2b-2a08-4c3c-9561-94454cb0b482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:41:33 compute-0 nova_compute[189268]: 2025-11-22 08:41:33.128 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:41:35 compute-0 nova_compute[189268]: 2025-11-22 08:41:35.201 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Updating instance_info_cache with network_info: [{"id": "433ff318-0c74-4ba4-ac48-8114bc74a566", "address": "fa:16:3e:4d:1a:4a", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433ff318-0c", "ovs_interfaceid": "433ff318-0c74-4ba4-ac48-8114bc74a566", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:41:35 compute-0 nova_compute[189268]: 2025-11-22 08:41:35.221 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-64e4ab2b-2a08-4c3c-9561-94454cb0b482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:41:35 compute-0 nova_compute[189268]: 2025-11-22 08:41:35.222 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
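The instance_info_cache update above logs the full network_info blob as JSON: one VIF carrying a fixed address with an associated floating IP. A short sketch of pulling those addresses out of such a blob, abridged here to just the fields used, with the values from the entry above:

    import json

    # Abridged from the network_info logged above for instance 64e4ab2b-....
    blob = '''[{"id": "433ff318-0c74-4ba4-ac48-8114bc74a566",
                "network": {"subnets": [{"ips": [{"address": "192.168.0.63",
                  "floating_ips": [{"address": "192.168.122.201"}]}]}]}}]'''

    for vif in json.loads(blob):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], ip["address"],
                      [f["address"] for f in ip["floating_ips"]])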
Nov 22 08:41:35 compute-0 nova_compute[189268]: 2025-11-22 08:41:35.223 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:41:35 compute-0 nova_compute[189268]: 2025-11-22 08:41:35.224 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:41:35 compute-0 nova_compute[189268]: 2025-11-22 08:41:35.224 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
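Both periodic tasks above are interval-driven from nova.conf: the cache-heal pass that just finished runs every heal_instance_info_cache_interval seconds, and _reclaim_queued_deletes only does work when reclaim_instance_interval is positive, which is exactly the "<= 0, skipping" message logged here. The relevant [DEFAULT] options, shown with upstream default values rather than anything read from this node:

    [DEFAULT]
    # Seconds between _heal_instance_info_cache passes (0 disables; default 60).
    heal_instance_info_cache_interval = 60
    # Seconds before reclaiming SOFT_DELETED instances; <= 0 skips the task
    # entirely, producing the "skipping..." DEBUG line above (default 0).
    reclaim_instance_interval = 0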
Nov 22 08:41:36 compute-0 nova_compute[189268]: 2025-11-22 08:41:36.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:41:36 compute-0 nova_compute[189268]: 2025-11-22 08:41:36.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:41:36 compute-0 podman[248036]: 2025-11-22 08:41:36.128959593 +0000 UTC m=+0.077724568 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, container_name=kepler, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=base rhel9, release-0.7.12=, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 22 08:41:36 compute-0 podman[248037]: 2025-11-22 08:41:36.154981924 +0000 UTC m=+0.106909184 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:41:37 compute-0 nova_compute[189268]: 2025-11-22 08:41:37.048 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:37 compute-0 nova_compute[189268]: 2025-11-22 08:41:37.065 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:37 compute-0 nova_compute[189268]: 2025-11-22 08:41:37.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:41:37 compute-0 sshd-session[248034]: Invalid user hadoop from 80.94.92.164 port 60102
Nov 22 08:41:38 compute-0 sshd-session[248034]: Connection closed by invalid user hadoop 80.94.92.164 port 60102 [preauth]
Nov 22 08:41:41 compute-0 nova_compute[189268]: 2025-11-22 08:41:41.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:41:42 compute-0 nova_compute[189268]: 2025-11-22 08:41:42.050 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:42 compute-0 nova_compute[189268]: 2025-11-22 08:41:42.068 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:42 compute-0 nova_compute[189268]: 2025-11-22 08:41:42.096 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:41:43 compute-0 podman[248081]: 2025-11-22 08:41:43.122849162 +0000 UTC m=+0.075167739 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, name=ubi9-minimal, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, distribution-scope=public)
Nov 22 08:41:44 compute-0 podman[248102]: 2025-11-22 08:41:44.770935776 +0000 UTC m=+0.087446410 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.127 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.128 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.128 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.128 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.213 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.277 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.278 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.337 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.338 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.400 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.403 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.472 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.479 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.540 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.541 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.602 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.604 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.677 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.679 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:41:45 compute-0 nova_compute[189268]: 2025-11-22 08:41:45.746 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
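Each qemu-img info call in this audit is wrapped by oslo_concurrency.prlimit, which caps the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU time so a pathological image cannot stall the resource tracker, while --force-share allows reading a disk the running guest holds open. A sketch of the equivalent call through the oslo.concurrency API, using one of the image paths from the log:

    from oslo_concurrency import processutils

    # Reproduces the wrapped command logged above: qemu-img info under
    # address-space and CPU-time rlimits, reading a live image via --force-share.
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482/disk',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30))
    print(out)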
Nov 22 08:41:46 compute-0 nova_compute[189268]: 2025-11-22 08:41:46.066 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:41:46 compute-0 nova_compute[189268]: 2025-11-22 08:41:46.067 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4958MB free_disk=72.4552001953125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:41:46 compute-0 nova_compute[189268]: 2025-11-22 08:41:46.067 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:41:46 compute-0 nova_compute[189268]: 2025-11-22 08:41:46.068 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:41:46 compute-0 nova_compute[189268]: 2025-11-22 08:41:46.152 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:41:46 compute-0 nova_compute[189268]: 2025-11-22 08:41:46.153 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 64e4ab2b-2a08-4c3c-9561-94454cb0b482 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:41:46 compute-0 nova_compute[189268]: 2025-11-22 08:41:46.153 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:41:46 compute-0 nova_compute[189268]: 2025-11-22 08:41:46.153 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:41:46 compute-0 nova_compute[189268]: 2025-11-22 08:41:46.213 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:41:46 compute-0 nova_compute[189268]: 2025-11-22 08:41:46.227 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:41:46 compute-0 nova_compute[189268]: 2025-11-22 08:41:46.229 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:41:46 compute-0 nova_compute[189268]: 2025-11-22 08:41:46.229 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
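The inventory dict reported above is what Placement uses to size each resource class: capacity is (total - reserved) * allocation_ratio, which is why the 2 allocated VCPUs out of 8 physical leave ample headroom. Worked through with the values from this report:

    # Placement capacity per resource class:
    # (total - reserved) * allocation_ratio, using the inventory above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2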
Nov 22 08:41:47 compute-0 nova_compute[189268]: 2025-11-22 08:41:47.053 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:47 compute-0 nova_compute[189268]: 2025-11-22 08:41:47.070 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:47 compute-0 nova_compute[189268]: 2025-11-22 08:41:47.228 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:41:52 compute-0 nova_compute[189268]: 2025-11-22 08:41:52.056 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:52 compute-0 nova_compute[189268]: 2025-11-22 08:41:52.072 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:53 compute-0 podman[248154]: 2025-11-22 08:41:53.12171513 +0000 UTC m=+0.068493218 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 08:41:53 compute-0 podman[248152]: 2025-11-22 08:41:53.127814115 +0000 UTC m=+0.081597582 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 22 08:41:53 compute-0 podman[248153]: 2025-11-22 08:41:53.150902337 +0000 UTC m=+0.102503955 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:41:57 compute-0 nova_compute[189268]: 2025-11-22 08:41:57.059 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:57 compute-0 nova_compute[189268]: 2025-11-22 08:41:57.074 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:41:59 compute-0 podman[203476]: time="2025-11-22T08:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:41:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:41:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Nov 22 08:42:01 compute-0 openstack_network_exporter[205661]: ERROR   08:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:42:01 compute-0 openstack_network_exporter[205661]: ERROR   08:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:42:01 compute-0 openstack_network_exporter[205661]: ERROR   08:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:42:01 compute-0 openstack_network_exporter[205661]: ERROR   08:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:42:01 compute-0 openstack_network_exporter[205661]: ERROR   08:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:42:02 compute-0 nova_compute[189268]: 2025-11-22 08:42:02.061 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:02 compute-0 nova_compute[189268]: 2025-11-22 08:42:02.076 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:03 compute-0 podman[248210]: 2025-11-22 08:42:03.122958678 +0000 UTC m=+0.072105755 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:42:03 compute-0 podman[248209]: 2025-11-22 08:42:03.125143078 +0000 UTC m=+0.079524266 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, container_name=ceilometer_agent_compute)
Nov 22 08:42:06 compute-0 nova_compute[189268]: 2025-11-22 08:42:06.236 189273 DEBUG nova.compute.manager [req-28bf98d6-d8fb-4d28-b22b-71718e7437f5 req-6966168d-b600-492b-921b-8b9bdd68899c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Received event network-changed-433ff318-0c74-4ba4-ac48-8114bc74a566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:42:06 compute-0 nova_compute[189268]: 2025-11-22 08:42:06.237 189273 DEBUG nova.compute.manager [req-28bf98d6-d8fb-4d28-b22b-71718e7437f5 req-6966168d-b600-492b-921b-8b9bdd68899c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Refreshing instance network info cache due to event network-changed-433ff318-0c74-4ba4-ac48-8114bc74a566. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:42:06 compute-0 nova_compute[189268]: 2025-11-22 08:42:06.237 189273 DEBUG oslo_concurrency.lockutils [req-28bf98d6-d8fb-4d28-b22b-71718e7437f5 req-6966168d-b600-492b-921b-8b9bdd68899c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-64e4ab2b-2a08-4c3c-9561-94454cb0b482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:42:06 compute-0 nova_compute[189268]: 2025-11-22 08:42:06.237 189273 DEBUG oslo_concurrency.lockutils [req-28bf98d6-d8fb-4d28-b22b-71718e7437f5 req-6966168d-b600-492b-921b-8b9bdd68899c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-64e4ab2b-2a08-4c3c-9561-94454cb0b482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:42:06 compute-0 nova_compute[189268]: 2025-11-22 08:42:06.238 189273 DEBUG nova.network.neutron [req-28bf98d6-d8fb-4d28-b22b-71718e7437f5 req-6966168d-b600-492b-921b-8b9bdd68899c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Refreshing network info cache for port 433ff318-0c74-4ba4-ac48-8114bc74a566 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:42:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:06.592 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:cf:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'd6:f7:8f:a1:cd:35'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:42:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:06.593 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 08:42:06 compute-0 nova_compute[189268]: 2025-11-22 08:42:06.594 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.012 189273 DEBUG oslo_concurrency.lockutils [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.012 189273 DEBUG oslo_concurrency.lockutils [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.013 189273 DEBUG oslo_concurrency.lockutils [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.013 189273 DEBUG oslo_concurrency.lockutils [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.013 189273 DEBUG oslo_concurrency.lockutils [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.015 189273 INFO nova.compute.manager [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Terminating instance
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.016 189273 DEBUG nova.compute.manager [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.063 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:07 compute-0 kernel: tap433ff318-0c (unregistering): left promiscuous mode
Nov 22 08:42:07 compute-0 NetworkManager[56326]: <info>  [1763800927.0729] device (tap433ff318-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.092 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:07 compute-0 ovn_controller[97783]: 2025-11-22T08:42:07Z|00065|binding|INFO|Releasing lport 433ff318-0c74-4ba4-ac48-8114bc74a566 from this chassis (sb_readonly=0)
Nov 22 08:42:07 compute-0 ovn_controller[97783]: 2025-11-22T08:42:07Z|00066|binding|INFO|Setting lport 433ff318-0c74-4ba4-ac48-8114bc74a566 down in Southbound
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.094 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:07 compute-0 ovn_controller[97783]: 2025-11-22T08:42:07Z|00067|binding|INFO|Removing iface tap433ff318-0c ovn-installed in OVS
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.098 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.110 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:07 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:07.121 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:1a:4a 192.168.0.63'], port_security=['fa:16:3e:4d:1a:4a 192.168.0.63'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-eigzbqv6tptr-cfkm2etzuijf-gntxycdg4jfb-port-v6rwy3qsqi6x', 'neutron:cidrs': '192.168.0.63/24', 'neutron:device_id': '64e4ab2b-2a08-4c3c-9561-94454cb0b482', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-eigzbqv6tptr-cfkm2etzuijf-gntxycdg4jfb-port-v6rwy3qsqi6x', 'neutron:project_id': '80e46844b3824928a6138235e5ede512', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d35d3a2-03b3-4b0d-a4c4-f066616bbaa8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a46a1c4a-0f65-4313-a2a5-5e5bba4e3fd3, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=433ff318-0c74-4ba4-ac48-8114bc74a566) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:42:07 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:07.122 106642 INFO neutron.agent.ovn.metadata.agent [-] Port 433ff318-0c74-4ba4-ac48-8114bc74a566 in datapath 02517cc7-8060-4764-b9b0-b1d7f59e3ae8 unbound from our chassis
Nov 22 08:42:07 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:07.123 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02517cc7-8060-4764-b9b0-b1d7f59e3ae8
Nov 22 08:42:07 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:07.139 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[3222d28a-9c4c-4769-a9e9-f03b83e1c1c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:42:07 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 22 08:42:07 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 1min 36.108s CPU time.
Nov 22 08:42:07 compute-0 systemd-machined[155703]: Machine qemu-5-instance-00000005 terminated.
Nov 22 08:42:07 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:07.169 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[28213f2c-b13c-4230-81ba-1b11ed663ada]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:42:07 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:07.173 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[27f6fc07-0bc8-4bbf-8a27-66a001d7ae03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:42:07 compute-0 podman[248248]: 2025-11-22 08:42:07.192325497 +0000 UTC m=+0.138236209 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, release-0.7.12=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, release=1214.1726694543, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 22 08:42:07 compute-0 podman[248250]: 2025-11-22 08:42:07.200876997 +0000 UTC m=+0.141834326 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 08:42:07 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:07.211 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[6b10e0f0-6814-4545-b5d6-3c3068f43890]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:42:07 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:07.232 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[79fdfedd-6c3e-4bea-9376-d1c8218ed3d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02517cc7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:86:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 19, 'rx_bytes': 532, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 19, 'rx_bytes': 532, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501085, 'reachable_time': 25163, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248301, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:42:07 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:07.250 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[051e10e2-0444-4024-9345-0401e4d0d154]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap02517cc7-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501097, 'tstamp': 501097}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248307, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap02517cc7-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501100, 'tstamp': 501100}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248307, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:42:07 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:07.253 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02517cc7-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.255 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.261 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:07 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:07.262 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02517cc7-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:42:07 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:07.263 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:42:07 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:07.263 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02517cc7-80, col_values=(('external_ids', {'iface-id': '5e2a8859-83a6-4000-bcad-5571f3c7bd5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:42:07 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:07.264 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.305 189273 INFO nova.virt.libvirt.driver [-] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Instance destroyed successfully.
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.306 189273 DEBUG nova.objects.instance [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lazy-loading 'resources' on Instance uuid 64e4ab2b-2a08-4c3c-9561-94454cb0b482 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.318 189273 DEBUG nova.virt.libvirt.vif [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T08:34:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-qv6tptr-cfkm2etzuijf-gntxycdg4jfb-vnf-tuynx42zciyf',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-qv6tptr-cfkm2etzuijf-gntxycdg4jfb-vnf-tuynx42zciyf',id=5,image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T08:34:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='209b9e59-811e-4c2b-a756-c29ba92c4b5c'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='80e46844b3824928a6138235e5ede512',ramdisk_id='',reservation_id='r-dm7ragq6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T08:34:12Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01MjIyNzIyODM1MjMzODIzNzcyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTUyMjI3MjI4MzUyMzM4MjM3NzI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTIyMjcyMjgzNTIzMzgyMzc3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTUyMjI3MjI4MzUyMzM4MjM3NzI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01MjIyNzIyODM1MjMzODIzNzcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01MjIyNzIyODM1MjMzODIzNzcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Nov 22 08:42:07 compute-0 nova_compute[189268]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTIyMjcyMjgzNTIzMzgyMzc3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTUyMjI3MjI4MzUyMzM4MjM3NzI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01MjIyNzIyODM1MjMzODIzNzcyPT0tLQo=',user_id='27ed1dd009ad4e29863ab5e3a9826c94',uuid=64e4ab2b-2a08-4c3c-9561-94454cb0b482,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "433ff318-0c74-4ba4-ac48-8114bc74a566", "address": "fa:16:3e:4d:1a:4a", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433ff318-0c", "ovs_interfaceid": "433ff318-0c74-4ba4-ac48-8114bc74a566", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.318 189273 DEBUG nova.network.os_vif_util [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converting VIF {"id": "433ff318-0c74-4ba4-ac48-8114bc74a566", "address": "fa:16:3e:4d:1a:4a", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433ff318-0c", "ovs_interfaceid": "433ff318-0c74-4ba4-ac48-8114bc74a566", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.319 189273 DEBUG nova.network.os_vif_util [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:1a:4a,bridge_name='br-int',has_traffic_filtering=True,id=433ff318-0c74-4ba4-ac48-8114bc74a566,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap433ff318-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.319 189273 DEBUG os_vif [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:1a:4a,bridge_name='br-int',has_traffic_filtering=True,id=433ff318-0c74-4ba4-ac48-8114bc74a566,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap433ff318-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.322 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.322 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap433ff318-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.324 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.326 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.329 189273 INFO os_vif [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:1a:4a,bridge_name='br-int',has_traffic_filtering=True,id=433ff318-0c74-4ba4-ac48-8114bc74a566,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap433ff318-0c')
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.329 189273 INFO nova.virt.libvirt.driver [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Deleting instance files /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482_del
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.330 189273 INFO nova.virt.libvirt.driver [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Deletion of /var/lib/nova/instances/64e4ab2b-2a08-4c3c-9561-94454cb0b482_del complete
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.338 189273 DEBUG nova.compute.manager [req-015a0ad0-6521-48c2-8bad-a99f956794f7 req-ec9176c8-48af-4121-9880-59de94933e9c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Received event network-vif-unplugged-433ff318-0c74-4ba4-ac48-8114bc74a566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.338 189273 DEBUG oslo_concurrency.lockutils [req-015a0ad0-6521-48c2-8bad-a99f956794f7 req-ec9176c8-48af-4121-9880-59de94933e9c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.338 189273 DEBUG oslo_concurrency.lockutils [req-015a0ad0-6521-48c2-8bad-a99f956794f7 req-ec9176c8-48af-4121-9880-59de94933e9c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.338 189273 DEBUG oslo_concurrency.lockutils [req-015a0ad0-6521-48c2-8bad-a99f956794f7 req-ec9176c8-48af-4121-9880-59de94933e9c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.339 189273 DEBUG nova.compute.manager [req-015a0ad0-6521-48c2-8bad-a99f956794f7 req-ec9176c8-48af-4121-9880-59de94933e9c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] No waiting events found dispatching network-vif-unplugged-433ff318-0c74-4ba4-ac48-8114bc74a566 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.339 189273 DEBUG nova.compute.manager [req-015a0ad0-6521-48c2-8bad-a99f956794f7 req-ec9176c8-48af-4121-9880-59de94933e9c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Received event network-vif-unplugged-433ff318-0c74-4ba4-ac48-8114bc74a566 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.395 189273 INFO nova.compute.manager [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Took 0.38 seconds to destroy the instance on the hypervisor.
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.396 189273 DEBUG oslo.service.loopingcall [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.396 189273 DEBUG nova.compute.manager [-] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.397 189273 DEBUG nova.network.neutron [-] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 08:42:07 compute-0 rsyslogd[236668]: message too long (8192) with configured size 8096, begin of message is: 2025-11-22 08:42:07.318 189273 DEBUG nova.virt.libvirt.vif [None req-aec17f41-c4 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.579 189273 DEBUG nova.network.neutron [req-28bf98d6-d8fb-4d28-b22b-71718e7437f5 req-6966168d-b600-492b-921b-8b9bdd68899c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Updated VIF entry in instance network info cache for port 433ff318-0c74-4ba4-ac48-8114bc74a566. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.580 189273 DEBUG nova.network.neutron [req-28bf98d6-d8fb-4d28-b22b-71718e7437f5 req-6966168d-b600-492b-921b-8b9bdd68899c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Updating instance_info_cache with network_info: [{"id": "433ff318-0c74-4ba4-ac48-8114bc74a566", "address": "fa:16:3e:4d:1a:4a", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.63", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433ff318-0c", "ovs_interfaceid": "433ff318-0c74-4ba4-ac48-8114bc74a566", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:42:07 compute-0 nova_compute[189268]: 2025-11-22 08:42:07.597 189273 DEBUG oslo_concurrency.lockutils [req-28bf98d6-d8fb-4d28-b22b-71718e7437f5 req-6966168d-b600-492b-921b-8b9bdd68899c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-64e4ab2b-2a08-4c3c-9561-94454cb0b482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:42:09 compute-0 nova_compute[189268]: 2025-11-22 08:42:09.000 189273 DEBUG nova.network.neutron [-] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:42:09 compute-0 nova_compute[189268]: 2025-11-22 08:42:09.017 189273 INFO nova.compute.manager [-] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Took 1.62 seconds to deallocate network for instance.
Nov 22 08:42:09 compute-0 nova_compute[189268]: 2025-11-22 08:42:09.049 189273 DEBUG oslo_concurrency.lockutils [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:42:09 compute-0 nova_compute[189268]: 2025-11-22 08:42:09.049 189273 DEBUG oslo_concurrency.lockutils [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:42:09 compute-0 nova_compute[189268]: 2025-11-22 08:42:09.125 189273 DEBUG nova.compute.provider_tree [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:42:09 compute-0 nova_compute[189268]: 2025-11-22 08:42:09.144 189273 DEBUG nova.scheduler.client.report [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
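As a sanity check on the inventory above: Placement's effective schedulable capacity for each resource class is, to a first approximation, (total - reserved) * allocation_ratio, rounded down. A quick computation with the logged values:

```python
# Rough check of the schedulable capacity implied by the inventory logged
# above: capacity ~= int((total - reserved) * allocation_ratio).
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    cap = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    print(f"{rc}: {cap}")   # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70
```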
Nov 22 08:42:09 compute-0 nova_compute[189268]: 2025-11-22 08:42:09.171 189273 DEBUG oslo_concurrency.lockutils [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:42:09 compute-0 nova_compute[189268]: 2025-11-22 08:42:09.200 189273 INFO nova.scheduler.client.report [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Deleted allocations for instance 64e4ab2b-2a08-4c3c-9561-94454cb0b482
Nov 22 08:42:09 compute-0 nova_compute[189268]: 2025-11-22 08:42:09.258 189273 DEBUG oslo_concurrency.lockutils [None req-aec17f41-c4d4-4549-9518-c5d8064ab549 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:42:09 compute-0 nova_compute[189268]: 2025-11-22 08:42:09.404 189273 DEBUG nova.compute.manager [req-06bfa711-e28d-49cd-8577-0c6fa8f62f47 req-9e63c1d4-9146-4ba0-9dc9-3c771c99879a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Received event network-vif-plugged-433ff318-0c74-4ba4-ac48-8114bc74a566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:42:09 compute-0 nova_compute[189268]: 2025-11-22 08:42:09.405 189273 DEBUG oslo_concurrency.lockutils [req-06bfa711-e28d-49cd-8577-0c6fa8f62f47 req-9e63c1d4-9146-4ba0-9dc9-3c771c99879a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:42:09 compute-0 nova_compute[189268]: 2025-11-22 08:42:09.406 189273 DEBUG oslo_concurrency.lockutils [req-06bfa711-e28d-49cd-8577-0c6fa8f62f47 req-9e63c1d4-9146-4ba0-9dc9-3c771c99879a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:42:09 compute-0 nova_compute[189268]: 2025-11-22 08:42:09.406 189273 DEBUG oslo_concurrency.lockutils [req-06bfa711-e28d-49cd-8577-0c6fa8f62f47 req-9e63c1d4-9146-4ba0-9dc9-3c771c99879a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "64e4ab2b-2a08-4c3c-9561-94454cb0b482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:42:09 compute-0 nova_compute[189268]: 2025-11-22 08:42:09.407 189273 DEBUG nova.compute.manager [req-06bfa711-e28d-49cd-8577-0c6fa8f62f47 req-9e63c1d4-9146-4ba0-9dc9-3c771c99879a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] No waiting events found dispatching network-vif-plugged-433ff318-0c74-4ba4-ac48-8114bc74a566 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:42:09 compute-0 nova_compute[189268]: 2025-11-22 08:42:09.407 189273 WARNING nova.compute.manager [req-06bfa711-e28d-49cd-8577-0c6fa8f62f47 req-9e63c1d4-9146-4ba0-9dc9-3c771c99879a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Received unexpected event network-vif-plugged-433ff318-0c74-4ba4-ac48-8114bc74a566 for instance with vm_state deleted and task_state None.
Nov 22 08:42:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:09.982 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:42:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:09.983 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:42:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:09.984 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:42:12 compute-0 nova_compute[189268]: 2025-11-22 08:42:12.097 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:12 compute-0 nova_compute[189268]: 2025-11-22 08:42:12.325 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:14 compute-0 podman[248326]: 2025-11-22 08:42:14.14081869 +0000 UTC m=+0.096518424 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, release=1755695350)
Nov 22 08:42:15 compute-0 podman[248345]: 2025-11-22 08:42:15.118812073 +0000 UTC m=+0.067024018 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
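Both health_status=healthy entries above come from podman's built-in healthcheck machinery, which periodically runs the configured test command inside the container. The check can be re-run by hand; a minimal sketch, assuming the container name from the node_exporter log line above:

```python
# Re-run a container's configured healthcheck by hand; `podman healthcheck
# run` executes the test command and exits 0 when the container is healthy.
import subprocess

result = subprocess.run(['podman', 'healthcheck', 'run', 'node_exporter'])
print('healthy' if result.returncode == 0 else 'unhealthy')
```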
Nov 22 08:42:16 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:16.595 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
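The DbSetCommand above corresponds to ovsdbapp's db_set() call, which the metadata agent uses to bump its neutron:ovn-metadata-sb-cfg sequence marker in Chassis_Private. A sketch of the equivalent call, assuming an already-connected ovsdbapp API object `api` for the OVN Southbound DB (connection setup omitted):

```python
# Equivalent of the logged transaction: set one key in external_ids on the
# Chassis_Private record, doing nothing if the record is gone
# (if_exists=True, as shown in the DbSetCommand above). `api` is assumed
# to be an already-built ovsdbapp OVN Southbound API object.
with api.transaction(check_error=True) as txn:
    txn.add(api.db_set(
        'Chassis_Private',
        'e5f17f07-bc92-4131-bf96-5df2839ca4b0',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),
        if_exists=True,
    ))
```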
Nov 22 08:42:17 compute-0 nova_compute[189268]: 2025-11-22 08:42:17.099 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:17 compute-0 nova_compute[189268]: 2025-11-22 08:42:17.327 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.093 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them, so the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.094 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.095 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7bdc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
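The registration lines above submit each pollster to a ThreadPoolExecutor that, per the "[1] threads" line earlier, has a single worker, so the pollsters run serially. A generic sketch of that dispatch pattern (meter names taken from the log; the poll body is a stub, not ceilometer's implementation):

```python
# Generic sketch of the dispatch pattern the polling manager logs above:
# each pollster is submitted to a ThreadPoolExecutor, and with a single
# worker the submitted tasks execute one after another.
from concurrent.futures import ThreadPoolExecutor

def poll(name):
    # Stand-in for one pollster run (discovery + sampling).
    return f"polled {name}"

pollsters = ['network.incoming.bytes', 'network.outgoing.packets', 'cpu']
with ThreadPoolExecutor(max_workers=1) as executor:
    futures = [executor.submit(poll, p) for p in pollsters]
    for f in futures:
        print(f.result())
```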
Nov 22 08:42:22 compute-0 nova_compute[189268]: 2025-11-22 08:42:22.101 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.101 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '78b5db02-f49a-4c0b-b4f6-8d3b3d689e66', 'name': 'test_0', 'flavor': {'id': '796e25a8-f28d-499e-b2fb-dfae32f0eed7', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'de9f57cf-28b4-4cbd-b943-19aa098356bf'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '80e46844b3824928a6138235e5ede512', 'user_id': '27ed1dd009ad4e29863ab5e3a9826c94', 'hostId': '984f772f59769827b253e5a80433ef06cecf72950dcfa6e7ff2850b4', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.102 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.102 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.102 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.102 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.103 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-22T08:42:22.102330) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.107 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.bytes volume: 2556 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.107 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.108 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.108 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.108 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.108 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.108 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.108 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.108 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.108 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.109 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.109 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.109 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.109 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.109 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.109 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-22T08:42:22.108363) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.109 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.109 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.110 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.110 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.110 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-22T08:42:22.109429) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.110 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.110 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-22T08:42:22.110329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.110 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.110 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.110 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.111 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.111 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.111 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.111 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-22T08:42:22.111189) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.132 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/cpu volume: 48090000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.132 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.132 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.132 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.133 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.133 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.133 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-22T08:42:22.133234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.133 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.134 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.134 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.134 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.134 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.134 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.134 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-22T08:42:22.134813) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.156 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.156 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.156 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
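Unit check on the three capacity samples above: 1073741824 bytes is exactly 1 GiB, matching the m1.small flavor's 1 GB root and 1 GB ephemeral disks reported in the instance data earlier; the 485376-byte device is far smaller (plausibly a config drive, which is an assumption, not something the log states):

```python
# Quick unit check on the disk.device.capacity samples above.
GiB = 1024 ** 3
print(1073741824 == GiB)   # True: the 1 GB root and ephemeral disks
print(485376 / 1024)       # 474.0 (KiB): the much smaller third device
```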
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.157 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.157 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.157 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.157 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.157 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.157 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.158 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-22T08:42:22.157942) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.220 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.221 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.221 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.222 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.222 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.222 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.222 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.222 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.222 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.222 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.223 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.223 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-22T08:42:22.222657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.223 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.224 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.224 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.224 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.224 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.224 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.224 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.224 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.225 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.225 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.225 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.225 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.225 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.225 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.226 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.226 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.226 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-22T08:42:22.224574) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.226 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 1339396359 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.226 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-22T08:42:22.226165) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.226 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 138141875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.227 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.latency volume: 117550863 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.227 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.227 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.227 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.227 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.227 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.228 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.228 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.228 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.228 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.228 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.229 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.229 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.229 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.229 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.229 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-22T08:42:22.228034) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.229 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-22T08:42:22.229239) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.229 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.230 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.230 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.230 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.230 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.230 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.231 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.231 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.231 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.231 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.231 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.232 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.232 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.232 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.232 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.233 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.233 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.233 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.233 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.233 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-22T08:42:22.231186) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.233 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.233 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-22T08:42:22.233145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.234 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.234 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.234 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.234 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.234 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.234 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.235 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.235 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.235 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.236 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.236 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.236 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.236 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.236 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-22T08:42:22.234903) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.236 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.237 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.237 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.237 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.237 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.238 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-22T08:42:22.237005) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.238 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.238 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.238 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.238 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.238 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 18733649639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.238 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 19241219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.239 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.239 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.239 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.240 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.240 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.240 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.240 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-22T08:42:22.238557) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.240 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.240 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.240 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.241 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.241 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-22T08:42:22.240490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.241 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.241 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.241 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.241 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.241 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-22T08:42:22.241422) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.242 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.242 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.242 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.242 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.242 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.242 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.242 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.243 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.243 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.243 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.243 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-22T08:42:22.242599) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.243 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.243 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.243 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.243 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-22T08:42:22.243631) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.244 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.244 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.244 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.244 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.244 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.244 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.244 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.245 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.245 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.245 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.245 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.245 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.245 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.245 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.246 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.246 15 DEBUG ceilometer.compute.pollsters [-] 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66/memory.usage volume: 48.90625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.246 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-22T08:42:22.244737) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.246 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-22T08:42:22.246046) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.246 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.246 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.247 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.247 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.247 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.247 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.247 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.247 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.247 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.247 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.248 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.248 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.248 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.248 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.248 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.248 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.248 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.248 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.248 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.249 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.249 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.249 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.249 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.249 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.249 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.249 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:42:22.249 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:42:22 compute-0 nova_compute[189268]: 2025-11-22 08:42:22.301 189273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763800927.3007464, 64e4ab2b-2a08-4c3c-9561-94454cb0b482 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:42:22 compute-0 nova_compute[189268]: 2025-11-22 08:42:22.302 189273 INFO nova.compute.manager [-] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] VM Stopped (Lifecycle Event)
Nov 22 08:42:22 compute-0 nova_compute[189268]: 2025-11-22 08:42:22.319 189273 DEBUG nova.compute.manager [None req-45fcb3ec-32f4-4e73-8486-fde9c27c9e98 - - - - - -] [instance: 64e4ab2b-2a08-4c3c-9561-94454cb0b482] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:42:22 compute-0 nova_compute[189268]: 2025-11-22 08:42:22.330 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:24 compute-0 podman[248370]: 2025-11-22 08:42:24.132136406 +0000 UTC m=+0.083802551 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 08:42:24 compute-0 podman[248371]: 2025-11-22 08:42:24.149251907 +0000 UTC m=+0.096480123 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:42:24 compute-0 podman[248372]: 2025-11-22 08:42:24.151994411 +0000 UTC m=+0.083038450 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true)
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.441 189273 DEBUG oslo_concurrency.lockutils [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.442 189273 DEBUG oslo_concurrency.lockutils [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.443 189273 DEBUG oslo_concurrency.lockutils [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.443 189273 DEBUG oslo_concurrency.lockutils [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.444 189273 DEBUG oslo_concurrency.lockutils [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.446 189273 INFO nova.compute.manager [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Terminating instance
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.448 189273 DEBUG nova.compute.manager [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 08:42:24 compute-0 kernel: tap4645bc8c-a8 (unregistering): left promiscuous mode
Nov 22 08:42:24 compute-0 NetworkManager[56326]: <info>  [1763800944.5234] device (tap4645bc8c-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 08:42:24 compute-0 ovn_controller[97783]: 2025-11-22T08:42:24Z|00068|binding|INFO|Releasing lport 4645bc8c-a850-4f1b-9ebc-89d2ba862ffe from this chassis (sb_readonly=0)
Nov 22 08:42:24 compute-0 ovn_controller[97783]: 2025-11-22T08:42:24Z|00069|binding|INFO|Setting lport 4645bc8c-a850-4f1b-9ebc-89d2ba862ffe down in Southbound
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.534 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:24 compute-0 ovn_controller[97783]: 2025-11-22T08:42:24Z|00070|binding|INFO|Removing iface tap4645bc8c-a8 ovn-installed in OVS
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.541 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:24.564 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:4a:5d 192.168.0.53'], port_security=['fa:16:3e:4f:4a:5d 192.168.0.53'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.53/24', 'neutron:device_id': '78b5db02-f49a-4c0b-b4f6-8d3b3d689e66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80e46844b3824928a6138235e5ede512', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d35d3a2-03b3-4b0d-a4c4-f066616bbaa8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a46a1c4a-0f65-4313-a2a5-5e5bba4e3fd3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=4645bc8c-a850-4f1b-9ebc-89d2ba862ffe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:42:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:24.567 106642 INFO neutron.agent.ovn.metadata.agent [-] Port 4645bc8c-a850-4f1b-9ebc-89d2ba862ffe in datapath 02517cc7-8060-4764-b9b0-b1d7f59e3ae8 unbound from our chassis
Nov 22 08:42:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:24.569 106642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 02517cc7-8060-4764-b9b0-b1d7f59e3ae8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 08:42:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:24.571 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[50031198-ab65-4fc4-88e5-3e492c4cc54a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:42:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:24.574 106642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8 namespace which is not needed anymore
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.575 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:24 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 22 08:42:24 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 2min 48.494s CPU time.
Nov 22 08:42:24 compute-0 systemd-machined[155703]: Machine qemu-1-instance-00000001 terminated.
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.724 189273 DEBUG nova.compute.manager [req-7621985e-b450-4b50-b859-b9f3d60eef9e req-b0ba79e5-3e2c-4125-9c14-76597f972845 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Received event network-vif-unplugged-4645bc8c-a850-4f1b-9ebc-89d2ba862ffe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.725 189273 DEBUG oslo_concurrency.lockutils [req-7621985e-b450-4b50-b859-b9f3d60eef9e req-b0ba79e5-3e2c-4125-9c14-76597f972845 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.725 189273 DEBUG oslo_concurrency.lockutils [req-7621985e-b450-4b50-b859-b9f3d60eef9e req-b0ba79e5-3e2c-4125-9c14-76597f972845 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.725 189273 DEBUG oslo_concurrency.lockutils [req-7621985e-b450-4b50-b859-b9f3d60eef9e req-b0ba79e5-3e2c-4125-9c14-76597f972845 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.725 189273 DEBUG nova.compute.manager [req-7621985e-b450-4b50-b859-b9f3d60eef9e req-b0ba79e5-3e2c-4125-9c14-76597f972845 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] No waiting events found dispatching network-vif-unplugged-4645bc8c-a850-4f1b-9ebc-89d2ba862ffe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.725 189273 DEBUG nova.compute.manager [req-7621985e-b450-4b50-b859-b9f3d60eef9e req-b0ba79e5-3e2c-4125-9c14-76597f972845 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Received event network-vif-unplugged-4645bc8c-a850-4f1b-9ebc-89d2ba862ffe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
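The Acquiring/acquired/released trio above is oslo_concurrency's standard lock logging (lockutils.py:404/409/423). A sketch of the pattern that produces it, with the lock name copied from the log and a hypothetical body:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('78b5db02-f49a-4c0b-b4f6-8d3b3d689e66-events')
    def _pop_event():
        # Nova pops a waiting instance event here; with none registered,
        # the caller then logs "No waiting events found dispatching ...".
        return None

    _pop_event()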
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.753 189273 INFO nova.virt.libvirt.driver [-] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Instance destroyed successfully.
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.754 189273 DEBUG nova.objects.instance [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lazy-loading 'resources' on Instance uuid 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:42:24 compute-0 neutron-haproxy-ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8[239833]: [NOTICE]   (239837) : haproxy version is 2.8.14-c23fe91
Nov 22 08:42:24 compute-0 neutron-haproxy-ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8[239833]: [NOTICE]   (239837) : path to executable is /usr/sbin/haproxy
Nov 22 08:42:24 compute-0 neutron-haproxy-ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8[239833]: [WARNING]  (239837) : Exiting Master process...
Nov 22 08:42:24 compute-0 neutron-haproxy-ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8[239833]: [ALERT]    (239837) : Current worker (239839) exited with code 143 (Terminated)
Nov 22 08:42:24 compute-0 neutron-haproxy-ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8[239833]: [WARNING]  (239837) : All workers exited. Exiting... (0)
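haproxy's "exited with code 143" above is the conventional 128+signal encoding for a process killed by a signal: 143 - 128 = 15 = SIGTERM, consistent with the master process exiting on a requested stop rather than crashing. A one-liner to decode it:

    import signal

    code = 143
    print(signal.Signals(code - 128))  # Signals.SIGTERM -> graceful stop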
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.771 189273 DEBUG nova.virt.libvirt.vif [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T08:24:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T08:24:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='80e46844b3824928a6138235e5ede512',ramdisk_id='',reservation_id='r-mmjvr90v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='de9f57cf-28b4-4cbd-b943-19aa098356bf',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T08:24:54Z,user_data=None,user_id='27ed1dd009ad4e29863ab5e3a9826c94',uuid=78b5db02-f49a-4c0b-b4f6-8d3b3d689e66,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "address": "fa:16:3e:4f:4a:5d", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.53", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4645bc8c-a8", "ovs_interfaceid": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.772 189273 DEBUG nova.network.os_vif_util [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converting VIF {"id": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "address": "fa:16:3e:4f:4a:5d", "network": {"id": "02517cc7-8060-4764-b9b0-b1d7f59e3ae8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.53", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80e46844b3824928a6138235e5ede512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4645bc8c-a8", "ovs_interfaceid": "4645bc8c-a850-4f1b-9ebc-89d2ba862ffe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.772 189273 DEBUG nova.network.os_vif_util [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4f:4a:5d,bridge_name='br-int',has_traffic_filtering=True,id=4645bc8c-a850-4f1b-9ebc-89d2ba862ffe,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4645bc8c-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:42:24 compute-0 systemd[1]: libpod-2e1b0933d82ee1f2521bdc16470445f046c04ff32b8db5a776fbc580519eef6a.scope: Deactivated successfully.
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.773 189273 DEBUG os_vif [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:4a:5d,bridge_name='br-int',has_traffic_filtering=True,id=4645bc8c-a850-4f1b-9ebc-89d2ba862ffe,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4645bc8c-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.774 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.774 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4645bc8c-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
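The DelPortCommand transaction above is nova/os-vif driving ovsdbapp's Open_vSwitch schema API. A minimal sketch of the same call, assuming the default local ovsdb-server socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # if_exists=True matches the log line and makes the delete idempotent.
    api.del_port('tap4645bc8c-a8', bridge='br-int',
                 if_exists=True).execute(check_error=True)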
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.775 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.778 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:42:24 compute-0 podman[248457]: 2025-11-22 08:42:24.778713312 +0000 UTC m=+0.078535788 container died 2e1b0933d82ee1f2521bdc16470445f046c04ff32b8db5a776fbc580519eef6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.781 189273 INFO os_vif [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:4a:5d,bridge_name='br-int',has_traffic_filtering=True,id=4645bc8c-a850-4f1b-9ebc-89d2ba862ffe,network=Network(02517cc7-8060-4764-b9b0-b1d7f59e3ae8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4645bc8c-a8')
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.781 189273 INFO nova.virt.libvirt.driver [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Deleting instance files /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66_del
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.782 189273 INFO nova.virt.libvirt.driver [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Deletion of /var/lib/nova/instances/78b5db02-f49a-4c0b-b4f6-8d3b3d689e66_del complete
Nov 22 08:42:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2e1b0933d82ee1f2521bdc16470445f046c04ff32b8db5a776fbc580519eef6a-userdata-shm.mount: Deactivated successfully.
Nov 22 08:42:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-103cabd7f9946853411e67f45e0950d49d6f60b5772dc4aec63ec1a344d6c3d1-merged.mount: Deactivated successfully.
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.849 189273 INFO nova.compute.manager [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Took 0.40 seconds to destroy the instance on the hypervisor.
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.849 189273 DEBUG oslo.service.loopingcall [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.850 189273 DEBUG nova.compute.manager [-] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 08:42:24 compute-0 nova_compute[189268]: 2025-11-22 08:42:24.851 189273 DEBUG nova.network.neutron [-] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 08:42:24 compute-0 podman[248457]: 2025-11-22 08:42:24.897122096 +0000 UTC m=+0.196944592 container cleanup 2e1b0933d82ee1f2521bdc16470445f046c04ff32b8db5a776fbc580519eef6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 22 08:42:24 compute-0 systemd[1]: libpod-conmon-2e1b0933d82ee1f2521bdc16470445f046c04ff32b8db5a776fbc580519eef6a.scope: Deactivated successfully.
Nov 22 08:42:25 compute-0 podman[248500]: 2025-11-22 08:42:25.470786435 +0000 UTC m=+0.540612940 container remove 2e1b0933d82ee1f2521bdc16470445f046c04ff32b8db5a776fbc580519eef6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 08:42:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:25.480 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[76723966-00ca-4fce-a7b7-21a83d3419da]: (4, ('Sat Nov 22 08:42:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8 (2e1b0933d82ee1f2521bdc16470445f046c04ff32b8db5a776fbc580519eef6a)\n2e1b0933d82ee1f2521bdc16470445f046c04ff32b8db5a776fbc580519eef6a\nSat Nov 22 08:42:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8 (2e1b0933d82ee1f2521bdc16470445f046c04ff32b8db5a776fbc580519eef6a)\n2e1b0933d82ee1f2521bdc16470445f046c04ff32b8db5a776fbc580519eef6a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:42:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:25.483 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[1a431f38-ddc6-4f3f-b927-05ebd2430a8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
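The reply[...] lines above are oslo.privsep round trips: the unprivileged agent calls a decorated entrypoint, the privileged daemon executes it, and the serialized return value comes back (the leading 4 is privsep's RET message code). A sketch of how such an entrypoint is declared; the context and function names here are hypothetical:

    from oslo_privsep import capabilities, priv_context

    default = priv_context.PrivContext(
        'sketch', cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[capabilities.CAP_NET_ADMIN,
                      capabilities.CAP_SYS_ADMIN])

    @default.entrypoint
    def stop_container(name):
        # Runs inside the privileged daemon; its return value is what
        # shows up as reply[...]: (4, <result>) in the agent log.
        pass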
Nov 22 08:42:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:25.485 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02517cc7-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:42:25 compute-0 nova_compute[189268]: 2025-11-22 08:42:25.489 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:25 compute-0 kernel: tap02517cc7-80: left promiscuous mode
Nov 22 08:42:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:25.501 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[d9bf663f-5995-4856-a74b-5ec409a39078]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:42:25 compute-0 nova_compute[189268]: 2025-11-22 08:42:25.509 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:25.521 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef207b2-a335-4873-92c6-c73b1fcd1f7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:42:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:25.523 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[7bb9cb78-250a-4fb2-b40f-b769a36ad199]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:42:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:25.550 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[7a1886db-5186-420d-ba48-0c52570b5f21]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501074, 'reachable_time': 20238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248514, 'error': None, 'target': 'ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:42:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d02517cc7\x2d8060\x2d4764\x2db9b0\x2db1d7f59e3ae8.mount: Deactivated successfully.
Nov 22 08:42:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:25.563 106754 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 08:42:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:42:25.563 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[fb97b952-941d-4101-a284-09f57938b842]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
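The remove_netns call logged above maps onto a pyroute2 primitive. A minimal sketch, reusing the namespace name from the log:

    from pyroute2 import netns

    # Equivalent of the privileged remove_netns logged at ip_lib.py:607.
    netns.remove('ovnmeta-02517cc7-8060-4764-b9b0-b1d7f59e3ae8')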
Nov 22 08:42:26 compute-0 nova_compute[189268]: 2025-11-22 08:42:26.170 189273 DEBUG nova.network.neutron [-] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:42:26 compute-0 nova_compute[189268]: 2025-11-22 08:42:26.191 189273 INFO nova.compute.manager [-] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Took 1.34 seconds to deallocate network for instance.
Nov 22 08:42:26 compute-0 nova_compute[189268]: 2025-11-22 08:42:26.245 189273 DEBUG oslo_concurrency.lockutils [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:42:26 compute-0 nova_compute[189268]: 2025-11-22 08:42:26.246 189273 DEBUG oslo_concurrency.lockutils [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:42:26 compute-0 nova_compute[189268]: 2025-11-22 08:42:26.252 189273 DEBUG nova.compute.manager [req-67ead8e9-c067-4fb3-b491-33d68a0d22e3 req-87f84d36-3f35-4e7e-bbcf-cbf918128a6c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Received event network-vif-deleted-4645bc8c-a850-4f1b-9ebc-89d2ba862ffe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:42:26 compute-0 nova_compute[189268]: 2025-11-22 08:42:26.312 189273 DEBUG nova.compute.provider_tree [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:42:26 compute-0 nova_compute[189268]: 2025-11-22 08:42:26.325 189273 DEBUG nova.scheduler.client.report [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
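The inventory dict above feeds placement's capacity formula, (total - reserved) * allocation_ratio per resource class. Worked out for this host:

    # Usable capacity as placement derives it from the logged inventory.
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 79, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2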
Nov 22 08:42:26 compute-0 nova_compute[189268]: 2025-11-22 08:42:26.344 189273 DEBUG oslo_concurrency.lockutils [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:42:26 compute-0 nova_compute[189268]: 2025-11-22 08:42:26.372 189273 INFO nova.scheduler.client.report [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Deleted allocations for instance 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66
Nov 22 08:42:26 compute-0 nova_compute[189268]: 2025-11-22 08:42:26.432 189273 DEBUG oslo_concurrency.lockutils [None req-b0aebff8-4a4b-498e-8a7c-2dd4c0476d1f 27ed1dd009ad4e29863ab5e3a9826c94 80e46844b3824928a6138235e5ede512 - - default default] Lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:42:26 compute-0 nova_compute[189268]: 2025-11-22 08:42:26.799 189273 DEBUG nova.compute.manager [req-79c4a7c1-16cd-4512-986f-9a13eced6929 req-ee197ada-893b-444d-862d-7da5e9a9d32f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Received event network-vif-plugged-4645bc8c-a850-4f1b-9ebc-89d2ba862ffe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:42:26 compute-0 nova_compute[189268]: 2025-11-22 08:42:26.799 189273 DEBUG oslo_concurrency.lockutils [req-79c4a7c1-16cd-4512-986f-9a13eced6929 req-ee197ada-893b-444d-862d-7da5e9a9d32f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:42:26 compute-0 nova_compute[189268]: 2025-11-22 08:42:26.800 189273 DEBUG oslo_concurrency.lockutils [req-79c4a7c1-16cd-4512-986f-9a13eced6929 req-ee197ada-893b-444d-862d-7da5e9a9d32f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:42:26 compute-0 nova_compute[189268]: 2025-11-22 08:42:26.800 189273 DEBUG oslo_concurrency.lockutils [req-79c4a7c1-16cd-4512-986f-9a13eced6929 req-ee197ada-893b-444d-862d-7da5e9a9d32f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "78b5db02-f49a-4c0b-b4f6-8d3b3d689e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:42:26 compute-0 nova_compute[189268]: 2025-11-22 08:42:26.800 189273 DEBUG nova.compute.manager [req-79c4a7c1-16cd-4512-986f-9a13eced6929 req-ee197ada-893b-444d-862d-7da5e9a9d32f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] No waiting events found dispatching network-vif-plugged-4645bc8c-a850-4f1b-9ebc-89d2ba862ffe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:42:26 compute-0 nova_compute[189268]: 2025-11-22 08:42:26.800 189273 WARNING nova.compute.manager [req-79c4a7c1-16cd-4512-986f-9a13eced6929 req-ee197ada-893b-444d-862d-7da5e9a9d32f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Received unexpected event network-vif-plugged-4645bc8c-a850-4f1b-9ebc-89d2ba862ffe for instance with vm_state deleted and task_state None.
Nov 22 08:42:27 compute-0 nova_compute[189268]: 2025-11-22 08:42:27.103 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:29 compute-0 podman[203476]: time="2025-11-22T08:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:42:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:42:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4332 "" "Go-http-client/1.1"
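The two access-log lines above are the libpod HTTP API served over podman's unix socket. A sketch of the same containers/json query; the socket path is an assumption, and it relies on the third-party requests-unixsocket package:

    import requests_unixsocket

    session = requests_unixsocket.Session()
    resp = session.get(
        'http+unix://%2Frun%2Fpodman%2Fpodman.sock'
        '/v4.9.3/libpod/containers/json?all=true&external=false')
    print(len(resp.json()), 'containers')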
Nov 22 08:42:29 compute-0 nova_compute[189268]: 2025-11-22 08:42:29.777 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:31 compute-0 openstack_network_exporter[205661]: ERROR   08:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:42:31 compute-0 openstack_network_exporter[205661]: ERROR   08:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:42:31 compute-0 openstack_network_exporter[205661]: ERROR   08:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:42:31 compute-0 openstack_network_exporter[205661]: ERROR   08:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:42:31 compute-0 openstack_network_exporter[205661]: ERROR   08:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:42:32 compute-0 nova_compute[189268]: 2025-11-22 08:42:32.105 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:33 compute-0 nova_compute[189268]: 2025-11-22 08:42:33.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:42:34 compute-0 nova_compute[189268]: 2025-11-22 08:42:34.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:42:34 compute-0 nova_compute[189268]: 2025-11-22 08:42:34.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:42:34 compute-0 nova_compute[189268]: 2025-11-22 08:42:34.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:42:34 compute-0 nova_compute[189268]: 2025-11-22 08:42:34.115 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:42:34 compute-0 podman[248518]: 2025-11-22 08:42:34.158604136 +0000 UTC m=+0.106765850 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm)
Nov 22 08:42:34 compute-0 podman[248517]: 2025-11-22 08:42:34.166883449 +0000 UTC m=+0.125398252 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a)
Nov 22 08:42:34 compute-0 nova_compute[189268]: 2025-11-22 08:42:34.780 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:35 compute-0 nova_compute[189268]: 2025-11-22 08:42:35.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:42:35 compute-0 nova_compute[189268]: 2025-11-22 08:42:35.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
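The "CONF.reclaim_instance_interval <= 0, skipping" line above means deferred delete is disabled on this host: instances are destroyed immediately instead of being soft-deleted for later reclaim. The knob lives in nova.conf (sketch; 0 is the default):

    [DEFAULT]
    # > 0 enables soft delete; _reclaim_queued_deletes then purges
    # instances deleted more than this many seconds ago.
    reclaim_instance_interval = 0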
Nov 22 08:42:36 compute-0 nova_compute[189268]: 2025-11-22 08:42:36.095 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:42:37 compute-0 nova_compute[189268]: 2025-11-22 08:42:37.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:42:37 compute-0 nova_compute[189268]: 2025-11-22 08:42:37.109 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:38 compute-0 podman[248557]: 2025-11-22 08:42:38.153627518 +0000 UTC m=+0.100427108 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vcs-type=git, config_id=edpm, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, io.openshift.expose-services=, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 22 08:42:38 compute-0 podman[248558]: 2025-11-22 08:42:38.195666263 +0000 UTC m=+0.134584871 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 08:42:39 compute-0 nova_compute[189268]: 2025-11-22 08:42:39.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:42:39 compute-0 nova_compute[189268]: 2025-11-22 08:42:39.749 189273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763800944.747282, 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:42:39 compute-0 nova_compute[189268]: 2025-11-22 08:42:39.749 189273 INFO nova.compute.manager [-] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] VM Stopped (Lifecycle Event)
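The "VM Stopped (Lifecycle Event)" above originates from a libvirt domain-lifecycle callback that the driver turns into the LifecycleEvent emitted at driver.py:1653. A minimal standalone sketch of such a callback (read-only URI assumed; a real consumer must also run the libvirt event loop):

    import libvirt

    def on_lifecycle(conn, dom, event, detail, opaque):
        # VIR_DOMAIN_EVENT_STOPPED matches the "=> Stopped" emit above.
        if event == libvirt.VIR_DOMAIN_EVENT_STOPPED:
            print(dom.UUIDString(), 'stopped')

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.openReadOnly('qemu:///system')
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, on_lifecycle, None)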
Nov 22 08:42:39 compute-0 nova_compute[189268]: 2025-11-22 08:42:39.766 189273 DEBUG nova.compute.manager [None req-0a6f5907-3cc4-4582-93eb-4a97f64a583f - - - - - -] [instance: 78b5db02-f49a-4c0b-b4f6-8d3b3d689e66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:42:39 compute-0 nova_compute[189268]: 2025-11-22 08:42:39.783 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:42 compute-0 nova_compute[189268]: 2025-11-22 08:42:42.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:42:42 compute-0 nova_compute[189268]: 2025-11-22 08:42:42.111 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:44 compute-0 nova_compute[189268]: 2025-11-22 08:42:44.786 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:44 compute-0 podman[248600]: 2025-11-22 08:42:44.790800532 +0000 UTC m=+0.099657018 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, name=ubi9-minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Nov 22 08:42:46 compute-0 podman[248621]: 2025-11-22 08:42:46.120237392 +0000 UTC m=+0.073413220 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:42:47 compute-0 nova_compute[189268]: 2025-11-22 08:42:47.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:42:47 compute-0 nova_compute[189268]: 2025-11-22 08:42:47.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:42:47 compute-0 nova_compute[189268]: 2025-11-22 08:42:47.115 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:47 compute-0 nova_compute[189268]: 2025-11-22 08:42:47.122 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:42:47 compute-0 nova_compute[189268]: 2025-11-22 08:42:47.123 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:42:47 compute-0 nova_compute[189268]: 2025-11-22 08:42:47.123 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:42:47 compute-0 nova_compute[189268]: 2025-11-22 08:42:47.124 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:42:47 compute-0 nova_compute[189268]: 2025-11-22 08:42:47.509 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:42:47 compute-0 nova_compute[189268]: 2025-11-22 08:42:47.510 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5405MB free_disk=72.49922561645508GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:42:47 compute-0 nova_compute[189268]: 2025-11-22 08:42:47.511 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:42:47 compute-0 nova_compute[189268]: 2025-11-22 08:42:47.511 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:42:47 compute-0 nova_compute[189268]: 2025-11-22 08:42:47.564 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:42:47 compute-0 nova_compute[189268]: 2025-11-22 08:42:47.565 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:42:47 compute-0 nova_compute[189268]: 2025-11-22 08:42:47.593 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:42:47 compute-0 nova_compute[189268]: 2025-11-22 08:42:47.604 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
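The inventory payload above fully determines what Placement will schedule against: for each resource class, usable capacity is (total - reserved) * allocation_ratio. A minimal sketch reproducing the numbers from this log entry:

    # Capacity implied by the inventory reported above
    # (Placement's formula: (total - reserved) * allocation_ratio).
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2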
Nov 22 08:42:47 compute-0 nova_compute[189268]: 2025-11-22 08:42:47.625 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:42:47 compute-0 nova_compute[189268]: 2025-11-22 08:42:47.626 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
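The acquire/release pairs traced above follow oslo.concurrency's named-semaphore pattern: every method that touches tracker state serializes on the same lock name. A minimal sketch of that pattern (the class and method bodies are hypothetical; only the lock name and timings come from the log):

    from oslo_concurrency import lockutils

    class ResourceTrackerSketch:
        # Both methods serialize on one named semaphore, which is why the
        # log shows a "waited"/"held" pair for each of them in turn.
        @lockutils.synchronized("compute_resources")
        def clean_compute_node_cache(self):
            pass  # fast path: held 0.001s in the trace above

        @lockutils.synchronized("compute_resources")
        def _update_available_resource(self):
            pass  # full audit: held 0.115s in the trace above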
Nov 22 08:42:49 compute-0 nova_compute[189268]: 2025-11-22 08:42:49.789 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:52 compute-0 nova_compute[189268]: 2025-11-22 08:42:52.116 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:54 compute-0 nova_compute[189268]: 2025-11-22 08:42:54.793 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:55 compute-0 podman[248647]: 2025-11-22 08:42:55.108546888 +0000 UTC m=+0.061507080 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 08:42:55 compute-0 podman[248649]: 2025-11-22 08:42:55.110261044 +0000 UTC m=+0.057944144 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 08:42:55 compute-0 podman[248648]: 2025-11-22 08:42:55.128041373 +0000 UTC m=+0.078877828 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:42:56 compute-0 ovn_controller[97783]: 2025-11-22T08:42:56Z|00071|memory_trim|INFO|Detected inactivity (last active 30021 ms ago): trimming memory
Nov 22 08:42:57 compute-0 nova_compute[189268]: 2025-11-22 08:42:57.119 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:42:59 compute-0 podman[203476]: time="2025-11-22T08:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:42:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:42:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4333 "" "Go-http-client/1.1"
Nov 22 08:42:59 compute-0 nova_compute[189268]: 2025-11-22 08:42:59.797 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:01 compute-0 openstack_network_exporter[205661]: ERROR   08:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:43:01 compute-0 openstack_network_exporter[205661]: ERROR   08:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:43:01 compute-0 openstack_network_exporter[205661]: ERROR   08:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:43:01 compute-0 openstack_network_exporter[205661]: ERROR   08:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:43:01 compute-0 openstack_network_exporter[205661]: ERROR   08:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
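These exporter errors recur every poll: ovn-northd is a control-plane daemon, so on a compute node its control socket is expected to be absent, and no userspace (dpif-netdev) datapath exists for the PMD queries. A quick check, with socket paths assumed from the /run/ovn and /run/openvswitch mounts in the exporter config above:

    import glob

    # ovn-northd runs on control-plane nodes, so on this compute node the
    # exporter's lookup is expected to come up empty (hence the errors).
    print(glob.glob("/run/ovn/ovn-northd.*.ctl"))            # expected: []
    print(glob.glob("/run/openvswitch/ovsdb-server.*.ctl"))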
Nov 22 08:43:02 compute-0 nova_compute[189268]: 2025-11-22 08:43:02.121 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:04 compute-0 nova_compute[189268]: 2025-11-22 08:43:04.799 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:05 compute-0 podman[248706]: 2025-11-22 08:43:05.114127296 +0000 UTC m=+0.064139181 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi)
Nov 22 08:43:05 compute-0 podman[248705]: 2025-11-22 08:43:05.119723757 +0000 UTC m=+0.076074942 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 22 08:43:07 compute-0 nova_compute[189268]: 2025-11-22 08:43:07.124 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:09 compute-0 podman[248740]: 2025-11-22 08:43:09.159364363 +0000 UTC m=+0.113877532 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, config_id=edpm, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.buildah.version=1.29.0, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible)
Nov 22 08:43:09 compute-0 podman[248741]: 2025-11-22 08:43:09.166363301 +0000 UTC m=+0.122130744 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:43:09 compute-0 nova_compute[189268]: 2025-11-22 08:43:09.802 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:43:09.984 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:43:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:43:09.985 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:43:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:43:09.985 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:43:12 compute-0 nova_compute[189268]: 2025-11-22 08:43:12.127 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:14 compute-0 nova_compute[189268]: 2025-11-22 08:43:14.805 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:15 compute-0 podman[248786]: 2025-11-22 08:43:15.113889107 +0000 UTC m=+0.072783233 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1755695350, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, version=9.6)
Nov 22 08:43:17 compute-0 podman[248807]: 2025-11-22 08:43:17.122597845 +0000 UTC m=+0.072008353 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 22 08:43:17 compute-0 nova_compute[189268]: 2025-11-22 08:43:17.128 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:19 compute-0 nova_compute[189268]: 2025-11-22 08:43:19.810 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:22 compute-0 nova_compute[189268]: 2025-11-22 08:43:22.131 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:24 compute-0 nova_compute[189268]: 2025-11-22 08:43:24.814 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:26 compute-0 podman[248832]: 2025-11-22 08:43:26.124010462 +0000 UTC m=+0.075474136 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:43:26 compute-0 podman[248831]: 2025-11-22 08:43:26.124868865 +0000 UTC m=+0.079319440 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3)
Nov 22 08:43:26 compute-0 podman[248833]: 2025-11-22 08:43:26.152632504 +0000 UTC m=+0.091086097 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:43:27 compute-0 nova_compute[189268]: 2025-11-22 08:43:27.136 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:29 compute-0 podman[203476]: time="2025-11-22T08:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:43:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:43:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4338 "" "Go-http-client/1.1"
Nov 22 08:43:29 compute-0 nova_compute[189268]: 2025-11-22 08:43:29.817 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:31 compute-0 openstack_network_exporter[205661]: ERROR   08:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:43:31 compute-0 openstack_network_exporter[205661]: ERROR   08:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:43:31 compute-0 openstack_network_exporter[205661]: ERROR   08:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:43:31 compute-0 openstack_network_exporter[205661]: ERROR   08:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:43:31 compute-0 openstack_network_exporter[205661]: ERROR   08:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:43:32 compute-0 nova_compute[189268]: 2025-11-22 08:43:32.138 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:33 compute-0 nova_compute[189268]: 2025-11-22 08:43:33.628 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:43:34 compute-0 nova_compute[189268]: 2025-11-22 08:43:34.821 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:36 compute-0 nova_compute[189268]: 2025-11-22 08:43:36.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:43:36 compute-0 nova_compute[189268]: 2025-11-22 08:43:36.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:43:36 compute-0 nova_compute[189268]: 2025-11-22 08:43:36.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:43:36 compute-0 podman[248891]: 2025-11-22 08:43:36.108788099 +0000 UTC m=+0.065033985 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2)
Nov 22 08:43:36 compute-0 nova_compute[189268]: 2025-11-22 08:43:36.113 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:43:36 compute-0 nova_compute[189268]: 2025-11-22 08:43:36.113 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:43:36 compute-0 nova_compute[189268]: 2025-11-22 08:43:36.113 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
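The "Running periodic task" lines come from oslo.service's periodic_task machinery: each decorated manager method is invoked on its spacing, and the method itself may no-op on configuration, exactly as _reclaim_queued_deletes does here. A minimal sketch of the pattern (the spacing value and method body are illustrative, not nova's actual configuration):

    from oslo_service import periodic_task

    class ComputeManagerSketch(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            reclaim_instance_interval = 0  # stands in for the CONF lookup
            if reclaim_instance_interval <= 0:
                # matches "CONF.reclaim_instance_interval <= 0, skipping..."
                return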
Nov 22 08:43:36 compute-0 podman[248890]: 2025-11-22 08:43:36.115605743 +0000 UTC m=+0.073658678 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, io.buildah.version=1.41.4)
Nov 22 08:43:37 compute-0 nova_compute[189268]: 2025-11-22 08:43:37.140 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:38 compute-0 nova_compute[189268]: 2025-11-22 08:43:38.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:43:38 compute-0 nova_compute[189268]: 2025-11-22 08:43:38.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:43:39 compute-0 nova_compute[189268]: 2025-11-22 08:43:39.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:43:39 compute-0 nova_compute[189268]: 2025-11-22 08:43:39.823 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:40 compute-0 podman[248927]: 2025-11-22 08:43:40.132069553 +0000 UTC m=+0.078575859 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., container_name=kepler, managed_by=edpm_ansible, io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, release-0.7.12=, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.29.0, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9)
Nov 22 08:43:40 compute-0 podman[248928]: 2025-11-22 08:43:40.171182738 +0000 UTC m=+0.111641332 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:43:42 compute-0 nova_compute[189268]: 2025-11-22 08:43:42.142 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:43 compute-0 nova_compute[189268]: 2025-11-22 08:43:43.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:43:44 compute-0 nova_compute[189268]: 2025-11-22 08:43:44.827 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:46 compute-0 nova_compute[189268]: 2025-11-22 08:43:46.095 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:43:46 compute-0 podman[248970]: 2025-11-22 08:43:46.120864182 +0000 UTC m=+0.074267704 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 22 08:43:47 compute-0 nova_compute[189268]: 2025-11-22 08:43:47.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:43:47 compute-0 nova_compute[189268]: 2025-11-22 08:43:47.147 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:48 compute-0 podman[248989]: 2025-11-22 08:43:48.190896058 +0000 UTC m=+0.137786764 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
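The systemd collector in the node_exporter invocation above only watches units matching its unit-include pattern. The filter can be checked directly; node_exporter anchors the regexp, which fullmatch approximates, and the unit names below are illustrative:

    import re

    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ["edpm_nova_compute.service", "ovsdb-server.service",
                 "virtqemud.service", "sshd.service"]:
        print(unit, bool(unit_include.fullmatch(unit)))
    # only sshd.service falls outside the filter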
Nov 22 08:43:49 compute-0 nova_compute[189268]: 2025-11-22 08:43:49.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:43:49 compute-0 nova_compute[189268]: 2025-11-22 08:43:49.122 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:43:49 compute-0 nova_compute[189268]: 2025-11-22 08:43:49.122 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:43:49 compute-0 nova_compute[189268]: 2025-11-22 08:43:49.123 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:43:49 compute-0 nova_compute[189268]: 2025-11-22 08:43:49.123 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:43:49 compute-0 nova_compute[189268]: 2025-11-22 08:43:49.507 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:43:49 compute-0 nova_compute[189268]: 2025-11-22 08:43:49.508 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5396MB free_disk=72.49931335449219GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:43:49 compute-0 nova_compute[189268]: 2025-11-22 08:43:49.509 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:43:49 compute-0 nova_compute[189268]: 2025-11-22 08:43:49.509 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:43:49 compute-0 nova_compute[189268]: 2025-11-22 08:43:49.564 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:43:49 compute-0 nova_compute[189268]: 2025-11-22 08:43:49.564 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:43:49 compute-0 nova_compute[189268]: 2025-11-22 08:43:49.586 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:43:49 compute-0 nova_compute[189268]: 2025-11-22 08:43:49.600 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
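The inventory payload above is what placement uses to size this provider. Placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio; a minimal sketch of that arithmetic, plugging in the values reported for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 (the formula is the standard placement capacity calculation, not something shown in this log):

    # Sketch of placement's capacity arithmetic applied to the inventory
    # reported in the log line above: capacity = (total - reserved) * ratio.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, capacity)  # VCPU 32, MEMORY_MB 7167, DISK_GB 70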
Nov 22 08:43:49 compute-0 nova_compute[189268]: 2025-11-22 08:43:49.602 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:43:49 compute-0 nova_compute[189268]: 2025-11-22 08:43:49.602 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
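The acquire/release pairs bracketing the audit come from oslo.concurrency's lockutils module, whose file path appears in each of those lines. A minimal sketch of the same pattern, assuming only the semaphore name taken from the log:

    from oslo_concurrency import lockutils

    # Same in-process semaphore name as the log lines above; lockutils
    # emits the acquired/released debug lines with waited/held durations.
    @lockutils.synchronized('compute_resources')
    def _update_available_resource():
        pass  # body runs with the lock held

    # Equivalent context-manager form:
    with lockutils.lock('compute_resources'):
        pass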
Nov 22 08:43:49 compute-0 nova_compute[189268]: 2025-11-22 08:43:49.830 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:52 compute-0 nova_compute[189268]: 2025-11-22 08:43:52.153 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:54 compute-0 nova_compute[189268]: 2025-11-22 08:43:54.834 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
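The periodic [POLLIN] on fd 26 lines are the python-ovs poller (ovs/poller.py) waking up whenever the OVSDB connection used by ovsdbapp becomes readable. A self-contained sketch of that wait primitive, using a local socketpair in place of the real OVSDB socket:

    import socket
    import ovs.poller

    # Stand-in for the OVSDB connection: one readable end of a socketpair.
    a, b = socket.socketpair()
    b.send(b'x')

    poller = ovs.poller.Poller()
    poller.fd_wait(a.fileno(), ovs.poller.POLLIN)
    poller.block()  # returns once the fd is readable; ovs logs "[POLLIN] on fd N"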
Nov 22 08:43:57 compute-0 podman[249015]: 2025-11-22 08:43:57.100432114 +0000 UTC m=+0.057157337 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:43:57 compute-0 podman[249016]: 2025-11-22 08:43:57.111619531 +0000 UTC m=+0.065241391 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 22 08:43:57 compute-0 podman[249014]: 2025-11-22 08:43:57.138055212 +0000 UTC m=+0.096735566 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 08:43:57 compute-0 nova_compute[189268]: 2025-11-22 08:43:57.154 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:43:59 compute-0 podman[203476]: time="2025-11-22T08:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:43:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:43:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4338 "" "Go-http-client/1.1"
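These access-log lines are the prometheus-podman-exporter scraping the libpod REST API over the podman socket that the podman_exporter config_data above mounts in. A stdlib-only sketch of the same container-list query (socket path and endpoint copied from the log; the connection class is illustrative, not part of any podman library):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a Unix socket, enough to reach the libpod API."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    # Same endpoint and query string as the access-log line above.
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true&external=false')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')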
Nov 22 08:43:59 compute-0 nova_compute[189268]: 2025-11-22 08:43:59.836 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:01 compute-0 openstack_network_exporter[205661]: ERROR   08:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:44:01 compute-0 openstack_network_exporter[205661]: ERROR   08:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:44:01 compute-0 openstack_network_exporter[205661]: ERROR   08:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:44:01 compute-0 openstack_network_exporter[205661]: ERROR   08:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
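The exporter errors above share one cause: appctl-style calls need a per-daemon control socket, and neither ovn-northd nor, from the exporter's point of view, ovsdb-server publishes one here; the datapath errors likewise reflect a vswitchd with no userspace (netdev) datapath configured. A small sketch of the socket lookup that fails, assuming the conventional <rundir>/<daemon>.<pid>.ctl naming (paths are assumptions, not from the log):

    import glob

    # appctl targets are Unix sockets named <daemon>.<pid>.ctl in the
    # daemon's rundir; an empty glob reproduces the "no control socket
    # files found" errors above.
    for rundir, daemon in (('/var/run/openvswitch', 'ovsdb-server'),
                           ('/var/run/ovn', 'ovn-northd')):
        if not glob.glob(f'{rundir}/{daemon}.*.ctl'):
            print(f'no control socket files found for {daemon}')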
Nov 22 08:44:02 compute-0 nova_compute[189268]: 2025-11-22 08:44:02.157 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:04 compute-0 nova_compute[189268]: 2025-11-22 08:44:04.839 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:07 compute-0 podman[249072]: 2025-11-22 08:44:07.127853994 +0000 UTC m=+0.082932080 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251118)
Nov 22 08:44:07 compute-0 nova_compute[189268]: 2025-11-22 08:44:07.158 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:07 compute-0 podman[249073]: 2025-11-22 08:44:07.168994815 +0000 UTC m=+0.111834407 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 22 08:44:09 compute-0 nova_compute[189268]: 2025-11-22 08:44:09.843 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:44:09.985 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:44:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:44:09.986 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:44:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:44:09.986 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:44:11 compute-0 podman[249110]: 2025-11-22 08:44:11.146684637 +0000 UTC m=+0.102747955 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.component=ubi9-container, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, container_name=kepler, distribution-scope=public, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64)
Nov 22 08:44:11 compute-0 podman[249111]: 2025-11-22 08:44:11.190126669 +0000 UTC m=+0.133900101 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 22 08:44:12 compute-0 nova_compute[189268]: 2025-11-22 08:44:12.161 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:14 compute-0 nova_compute[189268]: 2025-11-22 08:44:14.847 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:17 compute-0 podman[249156]: 2025-11-22 08:44:17.145935945 +0000 UTC m=+0.096082639 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., version=9.6, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1755695350, container_name=openstack_network_exporter, name=ubi9-minimal, architecture=x86_64)
Nov 22 08:44:17 compute-0 nova_compute[189268]: 2025-11-22 08:44:17.163 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:19 compute-0 podman[249175]: 2025-11-22 08:44:19.095000537 +0000 UTC m=+0.056968042 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:44:19 compute-0 nova_compute[189268]: 2025-11-22 08:44:19.850 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.094 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.094 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
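These two lines record the polling cycle being pushed through a single worker thread: with more pollsters than workers, submissions queue and run serially, which is exactly the slowdown the first message warns about. A minimal illustration with concurrent.futures (pollster count hypothetical):

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        return name  # stand-in for one pollster's work

    # One worker, many pollsters: tasks queue and execute one at a time,
    # so the whole cycle takes longer, as the warning above notes.
    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(poll, f'pollster-{i}') for i in range(30)]
        results = [f.result() for f in futures]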
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.094 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.095 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.097 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb7b7067e0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.099 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.100 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.100 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.100 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.100 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.100 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.100 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
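Every triplet in this block has the same shape: register the pollster with the shared executor, run its local_instances discovery, then skip when discovery returns nothing, which is expected on a compute node currently hosting no instances. A paraphrased sketch of that control flow (function names simplified, not the actual ceilometer API):

    def run_pollster(pollster, discover):
        """Sketch of the discovery-then-poll step logged above."""
        resources = discover('local_instances')  # e.g. libvirt domains on this host
        if not resources:
            # Matches the "Skip pollster <name>, no resources found this
            # cycle" debug lines: nothing to measure, so return early.
            print(f'Skip pollster {pollster}, no resources found this cycle')
            return []
        return [f'{pollster} sample for {r}' for r in resources]

    # With no instances on compute-0, every pollster takes the skip path:
    run_pollster('network.incoming.bytes', lambda source: [])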
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.100 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.101 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.101 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.101 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.101 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.101 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.101 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.101 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.102 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.102 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.102 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.102 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.102 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.102 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.103 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.103 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.103 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.103 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.103 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.103 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.104 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.104 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.104 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.104 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.104 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.104 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.105 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.105 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.105 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.105 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.105 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.105 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.106 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.106 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.106 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.106 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.106 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.106 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.106 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.106 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.107 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.107 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
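[annotation] The paired "Executing discovery" / "Skip pollster" messages above show ceilometer's per-pollster cycle: each pollster's discovery method (local_instances here) runs first, and when it returns nothing the sample-collection step is skipped. A minimal sketch of that flow, with hypothetical names standing in for the real ceilometer classes:

    # Sketch of a discovery-then-poll cycle, modeled on the log pairs
    # above. discover_local_instances and run_pollster are stand-ins,
    # not the actual ceilometer.polling.manager API.
    import logging

    LOG = logging.getLogger("polling.manager")

    def discover_local_instances():
        # On an idle compute node there are no instances, so discovery
        # returns an empty list and every compute pollster is skipped.
        return []

    def run_pollster(name, get_samples):
        resources = discover_local_instances()
        if not resources:
            LOG.debug("Skip pollster %s, no resources found this cycle", name)
            return []
        return list(get_samples(resources))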
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.107 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.107 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.107 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.107 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.107 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.107 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.107 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.109 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.109 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.109 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.109 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:44:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:44:22.109 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
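[annotation] The run of "Finished processing pollster [...]" lines enumerates every meter that completed this polling cycle. They can be tallied straight from a journal capture; the file path below is a placeholder:

    # List the meters that finished in this cycle by parsing the
    # "Finished processing pollster [<name>]." lines above.
    import re

    pattern = re.compile(r"Finished processing pollster \[([^\]]+)\]")
    meters = set()
    with open("compute-0-journal.log") as fh:  # hypothetical path
        for line in fh:
            m = pattern.search(line)
            if m:
                meters.add(m.group(1))
    print(sorted(meters))  # e.g. ['cpu', 'disk.device.capacity', ...]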
Nov 22 08:44:22 compute-0 nova_compute[189268]: 2025-11-22 08:44:22.166 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:24 compute-0 nova_compute[189268]: 2025-11-22 08:44:24.855 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:27 compute-0 nova_compute[189268]: 2025-11-22 08:44:27.167 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
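[annotation] The recurring "[POLLIN] on fd 26 __log_wakeup" lines are the OVSDB IDL's event loop noting that its database connection socket became readable (routine keepalive/update traffic, not an error). A rough standard-library illustration of a POLLIN wakeup on a file descriptor, under the assumption that a socketpair stands in for the OVSDB connection:

    import select
    import socket

    a, b = socket.socketpair()
    poller = select.poll()
    poller.register(a.fileno(), select.POLLIN)
    b.send(b"ping")  # make the fd readable
    for fd, events in poller.poll(1000):
        if events & select.POLLIN:
            print(f"[POLLIN] on fd {fd}")  # mirrors the ovsdbapp vlog line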
Nov 22 08:44:28 compute-0 podman[249202]: 2025-11-22 08:44:28.120257243 +0000 UTC m=+0.063954497 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 08:44:28 compute-0 podman[249200]: 2025-11-22 08:44:28.146085018 +0000 UTC m=+0.105536119 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:44:28 compute-0 podman[249201]: 2025-11-22 08:44:28.147763433 +0000 UTC m=+0.100043524 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
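[annotation] The podman "container health_status" events above embed the whole container configuration as labels, but the operationally interesting fields are the container name and the health verdict. They can be extracted with a simple pattern; the sample line below is one of the events above, abbreviated:

    import re

    line = ("container health_status b82e87bb702f... "
            "(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, "
            "name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0)")

    name = re.search(r"\bname=([^,)]+)", line).group(1)
    status = re.search(r"\bhealth_status=([^,)]+)", line).group(1)
    print(name, status)  # -> ovn_metadata_agent healthy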
Nov 22 08:44:28 compute-0 nova_compute[189268]: 2025-11-22 08:44:28.404 189273 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 5.95 sec
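[annotation] The WARNING above means one execution of the periodic DbDriver._report_state call took 5.95 s longer than its configured interval, so the next run started immediately instead of after a sleep. A bare-bones sketch of that fixed-interval pattern (interval, cycle count, and workload here are made up, not nova's values):

    import logging
    import time

    LOG = logging.getLogger("loopingcall")

    def fixed_interval_loop(func, interval, cycles=3):
        for _ in range(cycles):
            start = time.monotonic()
            func()
            delay = interval - (time.monotonic() - start)
            if delay <= 0:
                LOG.warning("Function %r run outlasted interval by %.2f sec",
                            func.__name__, -delay)
                delay = 0
            time.sleep(delay)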
Nov 22 08:44:29 compute-0 podman[203476]: time="2025-11-22T08:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:44:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:44:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4336 "" "Go-http-client/1.1"
Nov 22 08:44:29 compute-0 nova_compute[189268]: 2025-11-22 08:44:29.857 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:31 compute-0 openstack_network_exporter[205661]: ERROR   08:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:44:31 compute-0 openstack_network_exporter[205661]: ERROR   08:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:44:31 compute-0 openstack_network_exporter[205661]: ERROR   08:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:44:31 compute-0 openstack_network_exporter[205661]: ERROR   08:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:44:31 compute-0 openstack_network_exporter[205661]: ERROR   08:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
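[annotation] These exporter errors repeat every scrape: it probes for ovn-northd and ovsdb-server control sockets that do not exist on this node (ovn-northd is a control-plane daemon, and the ovsdb-server socket is not where the exporter looks), and the PMD queries fail because there is no userspace (netdev) datapath here. A quick existence check, assuming the conventional run directories:

    # Look for the control sockets the exporter wants; paths are the
    # usual OVS/OVN defaults and may differ per deployment.
    import glob

    for pattern in ("/run/openvswitch/ovsdb-server.*.ctl",
                    "/run/ovn/ovn-northd.*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", matches or "no control socket files found")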
Nov 22 08:44:32 compute-0 nova_compute[189268]: 2025-11-22 08:44:32.170 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:34 compute-0 nova_compute[189268]: 2025-11-22 08:44:34.601 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:44:34 compute-0 nova_compute[189268]: 2025-11-22 08:44:34.860 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:36 compute-0 nova_compute[189268]: 2025-11-22 08:44:36.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:44:36 compute-0 nova_compute[189268]: 2025-11-22 08:44:36.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:44:37 compute-0 nova_compute[189268]: 2025-11-22 08:44:37.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:44:37 compute-0 nova_compute[189268]: 2025-11-22 08:44:37.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:44:37 compute-0 nova_compute[189268]: 2025-11-22 08:44:37.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:44:37 compute-0 nova_compute[189268]: 2025-11-22 08:44:37.112 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
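[annotation] The periodic tasks above each run on their own cadence, and several bail out immediately: _reclaim_queued_deletes is disabled because reclaim_instance_interval <= 0, and the info-cache heal finds no instances to refresh. A condensed sketch of that guard pattern (the task registry is simplified; only the config name comes from the log):

    reclaim_instance_interval = 0  # value implied by the "skipping" line

    def _reclaim_queued_deletes():
        if reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # otherwise soft-deleted instances older than the interval
        # would be purged here

    def _heal_instance_info_cache(instances):
        if not instances:
            print("Didn't find any instances for network info cache update.")

    _reclaim_queued_deletes()
    _heal_instance_info_cache([])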
Nov 22 08:44:37 compute-0 nova_compute[189268]: 2025-11-22 08:44:37.171 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:38 compute-0 nova_compute[189268]: 2025-11-22 08:44:38.097 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:44:38 compute-0 nova_compute[189268]: 2025-11-22 08:44:38.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:44:38 compute-0 podman[249262]: 2025-11-22 08:44:38.121358724 +0000 UTC m=+0.078178104 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi)
Nov 22 08:44:38 compute-0 podman[249261]: 2025-11-22 08:44:38.122379521 +0000 UTC m=+0.077962848 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Nov 22 08:44:39 compute-0 nova_compute[189268]: 2025-11-22 08:44:39.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:44:39 compute-0 nova_compute[189268]: 2025-11-22 08:44:39.864 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:42 compute-0 podman[249300]: 2025-11-22 08:44:42.138143013 +0000 UTC m=+0.087315367 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vcs-type=git, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, version=9.4, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_id=edpm, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc.)
Nov 22 08:44:42 compute-0 podman[249301]: 2025-11-22 08:44:42.156356826 +0000 UTC m=+0.101950784 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 08:44:42 compute-0 nova_compute[189268]: 2025-11-22 08:44:42.173 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:44 compute-0 nova_compute[189268]: 2025-11-22 08:44:44.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:44:44 compute-0 nova_compute[189268]: 2025-11-22 08:44:44.866 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:47 compute-0 nova_compute[189268]: 2025-11-22 08:44:47.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:44:47 compute-0 nova_compute[189268]: 2025-11-22 08:44:47.177 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:48 compute-0 podman[249343]: 2025-11-22 08:44:48.146846471 +0000 UTC m=+0.096909520 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, container_name=openstack_network_exporter, architecture=x86_64, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 22 08:44:49 compute-0 nova_compute[189268]: 2025-11-22 08:44:49.869 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:50 compute-0 podman[249364]: 2025-11-22 08:44:50.107136771 +0000 UTC m=+0.068149798 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 22 08:44:51 compute-0 nova_compute[189268]: 2025-11-22 08:44:51.103 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:44:51 compute-0 nova_compute[189268]: 2025-11-22 08:44:51.127 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:44:51 compute-0 nova_compute[189268]: 2025-11-22 08:44:51.128 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:44:51 compute-0 nova_compute[189268]: 2025-11-22 08:44:51.128 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
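[annotation] The acquire/waited/held bookkeeping above comes from oslo.concurrency's lock wrapper around the resource tracker. A bare-bones equivalent with a threading.Lock, timing both the wait and the hold (names and output format are illustrative, not the oslo implementation):

    import threading
    import time

    _locks = {"compute_resources": threading.Lock()}

    def synchronized(name, func):
        lock = _locks[name]
        t0 = time.monotonic()
        with lock:
            waited = time.monotonic() - t0
            t1 = time.monotonic()
            try:
                return func()
            finally:
                held = time.monotonic() - t1
                print(f'Lock "{name}" waited {waited:.3f}s, held {held:.3f}s')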
Nov 22 08:44:51 compute-0 nova_compute[189268]: 2025-11-22 08:44:51.129 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:44:51 compute-0 nova_compute[189268]: 2025-11-22 08:44:51.551 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:44:51 compute-0 nova_compute[189268]: 2025-11-22 08:44:51.554 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5394MB free_disk=72.49931335449219GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:44:51 compute-0 nova_compute[189268]: 2025-11-22 08:44:51.554 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:44:51 compute-0 nova_compute[189268]: 2025-11-22 08:44:51.555 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:44:51 compute-0 nova_compute[189268]: 2025-11-22 08:44:51.652 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:44:51 compute-0 nova_compute[189268]: 2025-11-22 08:44:51.653 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:44:51 compute-0 nova_compute[189268]: 2025-11-22 08:44:51.688 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:44:51 compute-0 nova_compute[189268]: 2025-11-22 08:44:51.706 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
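[annotation] The inventory line above pins down what this node offers the scheduler: placement admits new allocations up to (total - reserved) * allocation_ratio per resource class. Worked out from the logged numbers:

    # Effective schedulable capacity implied by the inventory above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        limit = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, limit)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2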
Nov 22 08:44:51 compute-0 nova_compute[189268]: 2025-11-22 08:44:51.709 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:44:51 compute-0 nova_compute[189268]: 2025-11-22 08:44:51.709 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:44:52 compute-0 nova_compute[189268]: 2025-11-22 08:44:52.180 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:54 compute-0 nova_compute[189268]: 2025-11-22 08:44:54.875 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:57 compute-0 nova_compute[189268]: 2025-11-22 08:44:57.185 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:44:59 compute-0 podman[249389]: 2025-11-22 08:44:59.100944734 +0000 UTC m=+0.054063055 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 22 08:44:59 compute-0 podman[249388]: 2025-11-22 08:44:59.11705433 +0000 UTC m=+0.073457158 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 08:44:59 compute-0 podman[249390]: 2025-11-22 08:44:59.13737042 +0000 UTC m=+0.075979996 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 08:44:59 compute-0 podman[203476]: time="2025-11-22T08:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:44:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:44:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4346 "" "Go-http-client/1.1"
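[annotation] The GET lines above are podman_exporter hitting the libpod REST API over the podman UNIX socket. The same endpoint can be queried directly with the standard library; the socket path and API version are taken from the log lines, and root access to the socket is assumed:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")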
Nov 22 08:44:59 compute-0 nova_compute[189268]: 2025-11-22 08:44:59.879 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:01 compute-0 openstack_network_exporter[205661]: ERROR   08:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:45:01 compute-0 openstack_network_exporter[205661]: ERROR   08:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:45:01 compute-0 openstack_network_exporter[205661]: ERROR   08:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:45:01 compute-0 openstack_network_exporter[205661]: ERROR   08:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:45:01 compute-0 openstack_network_exporter[205661]: ERROR   08:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:45:02 compute-0 nova_compute[189268]: 2025-11-22 08:45:02.186 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:04 compute-0 nova_compute[189268]: 2025-11-22 08:45:04.882 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:07 compute-0 nova_compute[189268]: 2025-11-22 08:45:07.188 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:09 compute-0 podman[249454]: 2025-11-22 08:45:09.116648482 +0000 UTC m=+0.067631214 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 08:45:09 compute-0 podman[249453]: 2025-11-22 08:45:09.12261663 +0000 UTC m=+0.078268376 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:45:09 compute-0 nova_compute[189268]: 2025-11-22 08:45:09.885 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:45:09.987 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:45:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:45:09.987 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:45:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:45:09.987 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:45:12 compute-0 nova_compute[189268]: 2025-11-22 08:45:12.192 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:13 compute-0 podman[249491]: 2025-11-22 08:45:13.173558367 +0000 UTC m=+0.112957277 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, name=ubi9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, vcs-type=git, release=1214.1726694543, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 22 08:45:13 compute-0 podman[249492]: 2025-11-22 08:45:13.217671936 +0000 UTC m=+0.159550452 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 22 08:45:14 compute-0 nova_compute[189268]: 2025-11-22 08:45:14.887 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:17 compute-0 nova_compute[189268]: 2025-11-22 08:45:17.194 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:17 compute-0 nova_compute[189268]: 2025-11-22 08:45:17.368 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:45:19 compute-0 podman[249536]: 2025-11-22 08:45:19.128287695 +0000 UTC m=+0.080909597 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, release=1755695350, architecture=x86_64, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 22 08:45:19 compute-0 nova_compute[189268]: 2025-11-22 08:45:19.890 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:21 compute-0 podman[249559]: 2025-11-22 08:45:21.133782602 +0000 UTC m=+0.092789241 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:45:22 compute-0 nova_compute[189268]: 2025-11-22 08:45:22.197 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:24 compute-0 nova_compute[189268]: 2025-11-22 08:45:24.893 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:27 compute-0 nova_compute[189268]: 2025-11-22 08:45:27.200 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:29 compute-0 podman[203476]: time="2025-11-22T08:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:45:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:45:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
Nov 22 08:45:29 compute-0 nova_compute[189268]: 2025-11-22 08:45:29.897 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:30 compute-0 podman[249584]: 2025-11-22 08:45:30.109187045 +0000 UTC m=+0.057104085 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:45:30 compute-0 podman[249583]: 2025-11-22 08:45:30.116724085 +0000 UTC m=+0.068349623 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 08:45:30 compute-0 podman[249585]: 2025-11-22 08:45:30.140134425 +0000 UTC m=+0.082566740 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 08:45:31 compute-0 openstack_network_exporter[205661]: ERROR   08:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:45:31 compute-0 openstack_network_exporter[205661]: ERROR   08:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:45:31 compute-0 openstack_network_exporter[205661]: ERROR   08:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:45:31 compute-0 openstack_network_exporter[205661]: ERROR   08:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:45:31 compute-0 openstack_network_exporter[205661]: ERROR   08:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:45:32 compute-0 nova_compute[189268]: 2025-11-22 08:45:32.202 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:34 compute-0 nova_compute[189268]: 2025-11-22 08:45:34.900 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:35 compute-0 nova_compute[189268]: 2025-11-22 08:45:35.102 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:45:36 compute-0 nova_compute[189268]: 2025-11-22 08:45:36.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:45:36 compute-0 nova_compute[189268]: 2025-11-22 08:45:36.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:45:37 compute-0 nova_compute[189268]: 2025-11-22 08:45:37.206 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:38 compute-0 nova_compute[189268]: 2025-11-22 08:45:38.095 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:45:38 compute-0 nova_compute[189268]: 2025-11-22 08:45:38.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:45:38 compute-0 nova_compute[189268]: 2025-11-22 08:45:38.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:45:38 compute-0 nova_compute[189268]: 2025-11-22 08:45:38.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:45:38 compute-0 nova_compute[189268]: 2025-11-22 08:45:38.116 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:45:39 compute-0 nova_compute[189268]: 2025-11-22 08:45:39.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:45:39 compute-0 nova_compute[189268]: 2025-11-22 08:45:39.903 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:40 compute-0 nova_compute[189268]: 2025-11-22 08:45:40.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:45:40 compute-0 podman[249645]: 2025-11-22 08:45:40.125730665 +0000 UTC m=+0.076778977 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 22 08:45:40 compute-0 podman[249644]: 2025-11-22 08:45:40.146036064 +0000 UTC m=+0.101314278 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4)
Nov 22 08:45:42 compute-0 nova_compute[189268]: 2025-11-22 08:45:42.209 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:44 compute-0 nova_compute[189268]: 2025-11-22 08:45:44.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:45:44 compute-0 podman[249681]: 2025-11-22 08:45:44.121722384 +0000 UTC m=+0.075359029 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, com.redhat.component=ubi9-container, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, distribution-scope=public, io.buildah.version=1.29.0)
Nov 22 08:45:44 compute-0 podman[249682]: 2025-11-22 08:45:44.163133543 +0000 UTC m=+0.115738080 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 08:45:44 compute-0 nova_compute[189268]: 2025-11-22 08:45:44.906 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:47 compute-0 nova_compute[189268]: 2025-11-22 08:45:47.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:45:47 compute-0 nova_compute[189268]: 2025-11-22 08:45:47.211 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:49 compute-0 nova_compute[189268]: 2025-11-22 08:45:49.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:45:49 compute-0 nova_compute[189268]: 2025-11-22 08:45:49.910 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:50 compute-0 podman[249724]: 2025-11-22 08:45:50.107009541 +0000 UTC m=+0.066329279 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter)
Nov 22 08:45:50 compute-0 nova_compute[189268]: 2025-11-22 08:45:50.145 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:45:50 compute-0 nova_compute[189268]: 2025-11-22 08:45:50.145 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 08:45:50 compute-0 nova_compute[189268]: 2025-11-22 08:45:50.163 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 08:45:50 compute-0 nova_compute[189268]: 2025-11-22 08:45:50.163 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:45:50 compute-0 nova_compute[189268]: 2025-11-22 08:45:50.164 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 08:45:51 compute-0 nova_compute[189268]: 2025-11-22 08:45:51.225 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:45:52 compute-0 nova_compute[189268]: 2025-11-22 08:45:52.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:45:52 compute-0 nova_compute[189268]: 2025-11-22 08:45:52.126 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:45:52 compute-0 nova_compute[189268]: 2025-11-22 08:45:52.127 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:45:52 compute-0 nova_compute[189268]: 2025-11-22 08:45:52.127 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:45:52 compute-0 nova_compute[189268]: 2025-11-22 08:45:52.127 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:45:52 compute-0 podman[249744]: 2025-11-22 08:45:52.153674072 +0000 UTC m=+0.098044791 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:45:52 compute-0 nova_compute[189268]: 2025-11-22 08:45:52.214 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:52 compute-0 nova_compute[189268]: 2025-11-22 08:45:52.492 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:45:52 compute-0 nova_compute[189268]: 2025-11-22 08:45:52.493 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5385MB free_disk=72.4992904663086GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:45:52 compute-0 nova_compute[189268]: 2025-11-22 08:45:52.493 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:45:52 compute-0 nova_compute[189268]: 2025-11-22 08:45:52.494 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:45:52 compute-0 nova_compute[189268]: 2025-11-22 08:45:52.774 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:45:52 compute-0 nova_compute[189268]: 2025-11-22 08:45:52.776 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:45:52 compute-0 nova_compute[189268]: 2025-11-22 08:45:52.895 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing inventories for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 08:45:53 compute-0 nova_compute[189268]: 2025-11-22 08:45:53.003 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating ProviderTree inventory for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 08:45:53 compute-0 nova_compute[189268]: 2025-11-22 08:45:53.004 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating inventory in ProviderTree for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 08:45:53 compute-0 nova_compute[189268]: 2025-11-22 08:45:53.021 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing aggregate associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 08:45:53 compute-0 nova_compute[189268]: 2025-11-22 08:45:53.051 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing trait associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, traits: COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 08:45:53 compute-0 nova_compute[189268]: 2025-11-22 08:45:53.076 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:45:53 compute-0 nova_compute[189268]: 2025-11-22 08:45:53.098 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:45:53 compute-0 nova_compute[189268]: 2025-11-22 08:45:53.100 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:45:53 compute-0 nova_compute[189268]: 2025-11-22 08:45:53.101 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:45:54 compute-0 nova_compute[189268]: 2025-11-22 08:45:54.913 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:57 compute-0 nova_compute[189268]: 2025-11-22 08:45:57.217 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:45:59 compute-0 podman[203476]: time="2025-11-22T08:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:45:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:45:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4342 "" "Go-http-client/1.1"
Nov 22 08:45:59 compute-0 nova_compute[189268]: 2025-11-22 08:45:59.916 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:01 compute-0 podman[249766]: 2025-11-22 08:46:01.130829591 +0000 UTC m=+0.079270402 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:46:01 compute-0 podman[249768]: 2025-11-22 08:46:01.139418639 +0000 UTC m=+0.077127776 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 08:46:01 compute-0 podman[249767]: 2025-11-22 08:46:01.157501698 +0000 UTC m=+0.093483819 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 22 08:46:01 compute-0 anacron[50953]: Job `cron.monthly' started
Nov 22 08:46:01 compute-0 anacron[50953]: Job `cron.monthly' terminated
Nov 22 08:46:01 compute-0 anacron[50953]: Normal exit (3 jobs run)
Nov 22 08:46:01 compute-0 openstack_network_exporter[205661]: ERROR   08:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:46:01 compute-0 openstack_network_exporter[205661]: ERROR   08:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:46:01 compute-0 openstack_network_exporter[205661]: ERROR   08:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:46:01 compute-0 openstack_network_exporter[205661]: ERROR   08:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:46:01 compute-0 openstack_network_exporter[205661]: ERROR   08:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:46:02 compute-0 nova_compute[189268]: 2025-11-22 08:46:02.222 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:04 compute-0 nova_compute[189268]: 2025-11-22 08:46:04.918 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:06 compute-0 nova_compute[189268]: 2025-11-22 08:46:06.355 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:46:07 compute-0 nova_compute[189268]: 2025-11-22 08:46:07.225 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:09 compute-0 nova_compute[189268]: 2025-11-22 08:46:09.921 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:46:09.987 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:46:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:46:09.988 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:46:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:46:09.988 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:46:11 compute-0 podman[249825]: 2025-11-22 08:46:11.137654073 +0000 UTC m=+0.085143519 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_managed=true, config_id=edpm)
Nov 22 08:46:11 compute-0 podman[249826]: 2025-11-22 08:46:11.156865883 +0000 UTC m=+0.089490644 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 22 08:46:12 compute-0 nova_compute[189268]: 2025-11-22 08:46:12.227 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:14 compute-0 podman[249864]: 2025-11-22 08:46:14.797110849 +0000 UTC m=+0.114424475 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, vcs-type=git, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, maintainer=Red Hat, Inc., release=1214.1726694543)
Nov 22 08:46:14 compute-0 podman[249865]: 2025-11-22 08:46:14.826335194 +0000 UTC m=+0.130402558 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 08:46:14 compute-0 nova_compute[189268]: 2025-11-22 08:46:14.925 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:17 compute-0 nova_compute[189268]: 2025-11-22 08:46:17.230 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:19 compute-0 nova_compute[189268]: 2025-11-22 08:46:19.928 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:20 compute-0 sshd-session[249909]: Invalid user hadoop from 80.94.92.164 port 34356
Nov 22 08:46:20 compute-0 podman[249911]: 2025-11-22 08:46:20.739674673 +0000 UTC m=+0.070614333 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6)
Nov 22 08:46:21 compute-0 sshd-session[249909]: Connection closed by invalid user hadoop 80.94.92.164 port 34356 [preauth]
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.094 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.095 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.096 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.097 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.100 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.101 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.101 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.101 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.101 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.101 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.101 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.101 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.102 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.102 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.102 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.102 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.102 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.102 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.102 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.102 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.103 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.103 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.103 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.103 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.103 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.103 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.103 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.103 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.103 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.104 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.104 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.104 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.104 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.104 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.104 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.104 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.104 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.104 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.105 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.105 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.105 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.105 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.105 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.105 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.105 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.105 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.105 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.106 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.106 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.106 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.106 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.106 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.106 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.106 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.107 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.107 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.107 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.107 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.107 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.107 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.107 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.108 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.109 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.109 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.109 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.109 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.109 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.109 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:46:22.109 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:46:22 compute-0 nova_compute[189268]: 2025-11-22 08:46:22.232 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:23 compute-0 podman[249931]: 2025-11-22 08:46:23.144308885 +0000 UTC m=+0.093590713 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:46:24 compute-0 nova_compute[189268]: 2025-11-22 08:46:24.932 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:24 compute-0 nova_compute[189268]: 2025-11-22 08:46:24.993 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:46:24.995 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:cf:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'd6:f7:8f:a1:cd:35'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:46:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:46:24.997 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 08:46:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:46:24.998 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:46:27 compute-0 nova_compute[189268]: 2025-11-22 08:46:27.236 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:29 compute-0 podman[203476]: time="2025-11-22T08:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:46:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:46:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4342 "" "Go-http-client/1.1"
Nov 22 08:46:29 compute-0 nova_compute[189268]: 2025-11-22 08:46:29.935 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:31 compute-0 openstack_network_exporter[205661]: ERROR   08:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:46:31 compute-0 openstack_network_exporter[205661]: ERROR   08:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:46:31 compute-0 openstack_network_exporter[205661]: ERROR   08:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:46:31 compute-0 openstack_network_exporter[205661]: ERROR   08:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:46:31 compute-0 openstack_network_exporter[205661]: ERROR   08:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:46:32 compute-0 podman[249955]: 2025-11-22 08:46:32.128338737 +0000 UTC m=+0.079505860 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:46:32 compute-0 podman[249962]: 2025-11-22 08:46:32.143117799 +0000 UTC m=+0.070977863 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 22 08:46:32 compute-0 podman[249956]: 2025-11-22 08:46:32.16579641 +0000 UTC m=+0.104196534 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 22 08:46:32 compute-0 nova_compute[189268]: 2025-11-22 08:46:32.239 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:34 compute-0 nova_compute[189268]: 2025-11-22 08:46:34.939 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:35 compute-0 nova_compute[189268]: 2025-11-22 08:46:35.117 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:46:36 compute-0 nova_compute[189268]: 2025-11-22 08:46:36.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:46:36 compute-0 nova_compute[189268]: 2025-11-22 08:46:36.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:46:37 compute-0 nova_compute[189268]: 2025-11-22 08:46:37.241 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:38 compute-0 nova_compute[189268]: 2025-11-22 08:46:38.095 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:46:38 compute-0 nova_compute[189268]: 2025-11-22 08:46:38.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:46:38 compute-0 nova_compute[189268]: 2025-11-22 08:46:38.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:46:38 compute-0 nova_compute[189268]: 2025-11-22 08:46:38.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:46:38 compute-0 nova_compute[189268]: 2025-11-22 08:46:38.111 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:46:39 compute-0 nova_compute[189268]: 2025-11-22 08:46:39.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:46:39 compute-0 nova_compute[189268]: 2025-11-22 08:46:39.942 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:41 compute-0 nova_compute[189268]: 2025-11-22 08:46:41.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:46:42 compute-0 podman[250014]: 2025-11-22 08:46:42.127654999 +0000 UTC m=+0.071350043 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 08:46:42 compute-0 podman[250013]: 2025-11-22 08:46:42.160744665 +0000 UTC m=+0.106605717 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Nov 22 08:46:42 compute-0 nova_compute[189268]: 2025-11-22 08:46:42.247 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:44 compute-0 nova_compute[189268]: 2025-11-22 08:46:44.945 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:45 compute-0 nova_compute[189268]: 2025-11-22 08:46:45.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:46:45 compute-0 podman[250053]: 2025-11-22 08:46:45.133522573 +0000 UTC m=+0.088674623 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, release-0.7.12=, maintainer=Red Hat, Inc., container_name=kepler, build-date=2024-09-18T21:23:30, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.openshift.tags=base rhel9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container)
Nov 22 08:46:45 compute-0 podman[250054]: 2025-11-22 08:46:45.170523743 +0000 UTC m=+0.118377900 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 22 08:46:47 compute-0 nova_compute[189268]: 2025-11-22 08:46:47.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:46:47 compute-0 nova_compute[189268]: 2025-11-22 08:46:47.249 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:49 compute-0 nova_compute[189268]: 2025-11-22 08:46:49.950 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:51 compute-0 podman[250100]: 2025-11-22 08:46:51.119023455 +0000 UTC m=+0.076049527 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 22 08:46:52 compute-0 nova_compute[189268]: 2025-11-22 08:46:52.254 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:53 compute-0 nova_compute[189268]: 2025-11-22 08:46:53.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:46:53 compute-0 nova_compute[189268]: 2025-11-22 08:46:53.126 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:46:53 compute-0 nova_compute[189268]: 2025-11-22 08:46:53.127 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:46:53 compute-0 nova_compute[189268]: 2025-11-22 08:46:53.127 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:46:53 compute-0 nova_compute[189268]: 2025-11-22 08:46:53.127 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:46:53 compute-0 nova_compute[189268]: 2025-11-22 08:46:53.515 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:46:53 compute-0 nova_compute[189268]: 2025-11-22 08:46:53.516 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5391MB free_disk=72.49930953979492GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:46:53 compute-0 nova_compute[189268]: 2025-11-22 08:46:53.516 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:46:53 compute-0 nova_compute[189268]: 2025-11-22 08:46:53.517 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:46:53 compute-0 nova_compute[189268]: 2025-11-22 08:46:53.575 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:46:53 compute-0 nova_compute[189268]: 2025-11-22 08:46:53.576 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:46:53 compute-0 nova_compute[189268]: 2025-11-22 08:46:53.602 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:46:53 compute-0 nova_compute[189268]: 2025-11-22 08:46:53.615 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:46:53 compute-0 nova_compute[189268]: 2025-11-22 08:46:53.617 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:46:53 compute-0 nova_compute[189268]: 2025-11-22 08:46:53.617 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:46:54 compute-0 podman[250121]: 2025-11-22 08:46:54.1045505 +0000 UTC m=+0.060471464 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:46:54 compute-0 nova_compute[189268]: 2025-11-22 08:46:54.955 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:55 compute-0 ovn_controller[97783]: 2025-11-22T08:46:55Z|00072|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Nov 22 08:46:57 compute-0 nova_compute[189268]: 2025-11-22 08:46:57.256 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:46:59 compute-0 podman[203476]: time="2025-11-22T08:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:46:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:46:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4333 "" "Go-http-client/1.1"
Nov 22 08:46:59 compute-0 nova_compute[189268]: 2025-11-22 08:46:59.958 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:01 compute-0 openstack_network_exporter[205661]: ERROR   08:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:47:01 compute-0 openstack_network_exporter[205661]: ERROR   08:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:47:01 compute-0 openstack_network_exporter[205661]: ERROR   08:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:47:01 compute-0 openstack_network_exporter[205661]: ERROR   08:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:47:01 compute-0 openstack_network_exporter[205661]: ERROR   08:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:47:02 compute-0 nova_compute[189268]: 2025-11-22 08:47:02.258 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:03 compute-0 podman[250147]: 2025-11-22 08:47:03.369171921 +0000 UTC m=+0.072289638 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 08:47:03 compute-0 podman[250145]: 2025-11-22 08:47:03.381635002 +0000 UTC m=+0.087404809 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=multipathd)
Nov 22 08:47:03 compute-0 podman[250146]: 2025-11-22 08:47:03.391685768 +0000 UTC m=+0.101428701 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:47:04 compute-0 nova_compute[189268]: 2025-11-22 08:47:04.961 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:07 compute-0 nova_compute[189268]: 2025-11-22 08:47:07.261 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:09 compute-0 nova_compute[189268]: 2025-11-22 08:47:09.966 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:47:09.988 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:47:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:47:09.989 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:47:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:47:09.989 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:47:12 compute-0 nova_compute[189268]: 2025-11-22 08:47:12.265 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:13 compute-0 podman[250204]: 2025-11-22 08:47:13.118820714 +0000 UTC m=+0.069841163 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 08:47:13 compute-0 podman[250203]: 2025-11-22 08:47:13.120600341 +0000 UTC m=+0.076486668 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 22 08:47:14 compute-0 nova_compute[189268]: 2025-11-22 08:47:14.969 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:15 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:47:15.759 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:cf:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'd6:f7:8f:a1:cd:35'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:47:15 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:47:15.761 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 08:47:15 compute-0 nova_compute[189268]: 2025-11-22 08:47:15.762 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:16 compute-0 podman[250241]: 2025-11-22 08:47:16.121734411 +0000 UTC m=+0.077668002 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, release-0.7.12=, vcs-type=git, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, io.openshift.expose-services=, managed_by=edpm_ansible)
Nov 22 08:47:16 compute-0 podman[250242]: 2025-11-22 08:47:16.147944985 +0000 UTC m=+0.100713482 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 08:47:17 compute-0 nova_compute[189268]: 2025-11-22 08:47:17.266 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:17 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:47:17.763 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
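
[editor's note] The DbSetCommand above is the OVN metadata agent acknowledging southbound config by bumping the neutron:ovn-metadata-sb-cfg counter in the Chassis_Private table. A minimal sketch of issuing the same update through ovsdbapp, assuming sb_idl is an already-connected OVN southbound API object (connection setup omitted); the if_exists flag matches the logged command:

    # Sketch: produces the same transaction as the log line above.
    # sb_idl is assumed to be a connected ovsdbapp OVN southbound API.
    chassis_uuid = 'e5f17f07-bc92-4131-bf96-5df2839ca4b0'
    sb_idl.db_set(
        'Chassis_Private', chassis_uuid,
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),
        if_exists=True,
    ).execute(check_error=True)
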
Nov 22 08:47:19 compute-0 nova_compute[189268]: 2025-11-22 08:47:19.970 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:21 compute-0 nova_compute[189268]: 2025-11-22 08:47:21.274 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:21 compute-0 nova_compute[189268]: 2025-11-22 08:47:21.826 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:22 compute-0 podman[250285]: 2025-11-22 08:47:22.226739182 +0000 UTC m=+0.184917204 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=edpm, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Nov 22 08:47:22 compute-0 nova_compute[189268]: 2025-11-22 08:47:22.268 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:23 compute-0 nova_compute[189268]: 2025-11-22 08:47:23.547 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:24 compute-0 nova_compute[189268]: 2025-11-22 08:47:24.232 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:24 compute-0 nova_compute[189268]: 2025-11-22 08:47:24.975 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:25 compute-0 nova_compute[189268]: 2025-11-22 08:47:25.081 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:25 compute-0 podman[250305]: 2025-11-22 08:47:25.146184705 +0000 UTC m=+0.097635980 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:47:27 compute-0 nova_compute[189268]: 2025-11-22 08:47:27.271 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:29 compute-0 podman[203476]: time="2025-11-22T08:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:47:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:47:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4338 "" "Go-http-client/1.1"
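
[editor's note] The two GET requests above are a client (the podman_exporter container seen later in this log) hitting the libpod REST API over the podman unix socket. The same query can be reproduced from Python with only the standard library; the socket path is the one mounted into podman_exporter below:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection over an AF_UNIX socket."""
        def __init__(self, path):
            super().__init__("localhost")  # host header is ignored by podman
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c["Names"] for c in containers])
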
Nov 22 08:47:29 compute-0 nova_compute[189268]: 2025-11-22 08:47:29.977 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:31 compute-0 openstack_network_exporter[205661]: ERROR   08:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:47:31 compute-0 openstack_network_exporter[205661]: ERROR   08:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:47:31 compute-0 openstack_network_exporter[205661]: ERROR   08:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:47:31 compute-0 openstack_network_exporter[205661]: ERROR   08:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:47:31 compute-0 openstack_network_exporter[205661]: ERROR   08:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
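
[editor's note] These exporter errors are expected on a compute node: openstack-network-exporter locates each daemon through its <name>.<pid>.ctl control socket in the OVS/OVN rundirs, and ovn-northd plus the OVSDB servers run on the control plane, not here. The dpif-netdev calls fail with "please specify an existing datapath" because there is no userspace datapath to query, consistent with a kernel-datapath deployment. A hedged sketch of the lookup that fails:

    import glob
    import os

    # Rundirs as mounted into the exporter container (see its config_data).
    RUNDIRS = ["/run/openvswitch", "/run/ovn"]

    def find_control_socket(daemon):
        """Return a <daemon>.<pid>.ctl path, or None when the daemon is
        not running on this host -- the condition logged above."""
        for rundir in RUNDIRS:
            matches = glob.glob(os.path.join(rundir, "%s.*.ctl" % daemon))
            if matches:
                return matches[0]
        return None

    print(find_control_socket("ovn-northd"))  # None on a compute node
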
Nov 22 08:47:32 compute-0 nova_compute[189268]: 2025-11-22 08:47:32.273 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:34 compute-0 podman[250329]: 2025-11-22 08:47:34.11401534 +0000 UTC m=+0.073628214 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:47:34 compute-0 podman[250331]: 2025-11-22 08:47:34.11704045 +0000 UTC m=+0.065926649 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 22 08:47:34 compute-0 podman[250330]: 2025-11-22 08:47:34.153070235 +0000 UTC m=+0.106906435 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 08:47:34 compute-0 nova_compute[189268]: 2025-11-22 08:47:34.852 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:34 compute-0 nova_compute[189268]: 2025-11-22 08:47:34.979 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:35 compute-0 nova_compute[189268]: 2025-11-22 08:47:35.390 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:35 compute-0 nova_compute[189268]: 2025-11-22 08:47:35.618 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:47:35 compute-0 nova_compute[189268]: 2025-11-22 08:47:35.960 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:37 compute-0 nova_compute[189268]: 2025-11-22 08:47:37.275 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:38 compute-0 nova_compute[189268]: 2025-11-22 08:47:38.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:47:38 compute-0 nova_compute[189268]: 2025-11-22 08:47:38.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
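
[editor's note] The skip above is the usual oslo.service periodic-task pattern: the method is invoked on the ticker regardless, and guards itself on its config option (reclaim_instance_interval defaults to 0, which disables reclaiming soft-deleted instances). A reduced, self-contained sketch of the shape of such a task; the option registration here is illustrative, in Nova the option is registered by the service itself:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            # Same guard as the log line above: interval <= 0 disables it.
            if CONF.reclaim_instance_interval <= 0:
                print("CONF.reclaim_instance_interval <= 0, skipping...")
                return
            # reclaim work would go here
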
Nov 22 08:47:38 compute-0 nova_compute[189268]: 2025-11-22 08:47:38.757 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:39 compute-0 nova_compute[189268]: 2025-11-22 08:47:39.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:47:39 compute-0 nova_compute[189268]: 2025-11-22 08:47:39.097 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:47:39 compute-0 nova_compute[189268]: 2025-11-22 08:47:39.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:47:39 compute-0 nova_compute[189268]: 2025-11-22 08:47:39.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:47:39 compute-0 nova_compute[189268]: 2025-11-22 08:47:39.111 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:47:39 compute-0 nova_compute[189268]: 2025-11-22 08:47:39.981 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:40 compute-0 nova_compute[189268]: 2025-11-22 08:47:40.857 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:41 compute-0 nova_compute[189268]: 2025-11-22 08:47:41.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:47:42 compute-0 nova_compute[189268]: 2025-11-22 08:47:42.001 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:42 compute-0 nova_compute[189268]: 2025-11-22 08:47:42.279 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:43 compute-0 nova_compute[189268]: 2025-11-22 08:47:43.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:47:44 compute-0 podman[250386]: 2025-11-22 08:47:44.117759631 +0000 UTC m=+0.071194259 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm)
Nov 22 08:47:44 compute-0 podman[250385]: 2025-11-22 08:47:44.142824266 +0000 UTC m=+0.100921467 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a)
Nov 22 08:47:44 compute-0 nova_compute[189268]: 2025-11-22 08:47:44.985 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:45 compute-0 nova_compute[189268]: 2025-11-22 08:47:45.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:47:47 compute-0 podman[250422]: 2025-11-22 08:47:47.15583878 +0000 UTC m=+0.100436915 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.4, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.buildah.version=1.29.0)
Nov 22 08:47:47 compute-0 podman[250423]: 2025-11-22 08:47:47.208175107 +0000 UTC m=+0.156878101 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:47:47 compute-0 nova_compute[189268]: 2025-11-22 08:47:47.280 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:48 compute-0 nova_compute[189268]: 2025-11-22 08:47:48.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:47:49 compute-0 nova_compute[189268]: 2025-11-22 08:47:49.989 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:52 compute-0 nova_compute[189268]: 2025-11-22 08:47:52.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:47:52 compute-0 nova_compute[189268]: 2025-11-22 08:47:52.283 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:52 compute-0 nova_compute[189268]: 2025-11-22 08:47:52.357 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:53 compute-0 podman[250463]: 2025-11-22 08:47:53.15478344 +0000 UTC m=+0.103307131 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, release=1755695350, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, vendor=Red Hat, Inc.)
Nov 22 08:47:54 compute-0 nova_compute[189268]: 2025-11-22 08:47:54.101 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:47:54 compute-0 nova_compute[189268]: 2025-11-22 08:47:54.314 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:47:54 compute-0 nova_compute[189268]: 2025-11-22 08:47:54.315 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:47:54 compute-0 nova_compute[189268]: 2025-11-22 08:47:54.316 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
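
[editor's note] The Acquiring/acquired/released triplets that recur through this log come from oslo.concurrency, which DEBUG-logs how long each named semaphore was waited for and held. Minimal equivalent usage, with the same lock name the resource tracker uses:

    from oslo_concurrency import lockutils

    # Decorator form, as wrapped around the resource tracker methods above.
    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        pass  # runs with the in-process 'compute_resources' lock held

    # Context-manager form:
    with lockutils.lock('compute_resources'):
        pass
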
Nov 22 08:47:54 compute-0 nova_compute[189268]: 2025-11-22 08:47:54.316 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:47:54 compute-0 nova_compute[189268]: 2025-11-22 08:47:54.655 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:47:54 compute-0 nova_compute[189268]: 2025-11-22 08:47:54.656 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5405MB free_disk=72.49936294555664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
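
[editor's note] All eleven PCI devices in the resource view above are QEMU-emulated functions on bus 0000:00. A quick way to summarize such a pci_devices list (entries abbreviated from the log line; the remaining ones are elided):

    from collections import Counter

    pci_devices = [
        {"address": "0000:00:06.0", "vendor_id": "1af4", "product_id": "1005"},
        {"address": "0000:00:01.0", "vendor_id": "8086", "product_id": "7000"},
        {"address": "0000:00:04.0", "vendor_id": "1af4", "product_id": "1001"},
        # ... eight more entries as logged above ...
    ]

    print(Counter(dev["vendor_id"] for dev in pci_devices))
    # 1af4 = Red Hat virtio devices, 8086 = emulated Intel chipset functions
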
Nov 22 08:47:54 compute-0 nova_compute[189268]: 2025-11-22 08:47:54.657 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:47:54 compute-0 nova_compute[189268]: 2025-11-22 08:47:54.657 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:47:54 compute-0 nova_compute[189268]: 2025-11-22 08:47:54.993 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:55 compute-0 nova_compute[189268]: 2025-11-22 08:47:55.576 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance a04b24d5-3478-4e5f-bb51-abf299fa4459 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692
Nov 22 08:47:55 compute-0 nova_compute[189268]: 2025-11-22 08:47:55.576 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:47:55 compute-0 nova_compute[189268]: 2025-11-22 08:47:55.576 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:47:55 compute-0 nova_compute[189268]: 2025-11-22 08:47:55.634 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:47:55 compute-0 nova_compute[189268]: 2025-11-22 08:47:55.644 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
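
[editor's note] The inventory dict above is what the resource tracker reports to placement. Effective schedulable capacity per resource class is (total - reserved) * allocation_ratio, so this node advertises 32 VCPU, 7167 MB of RAM and 70.2 GB of disk; a worked check against the logged numbers:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
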
Nov 22 08:47:55 compute-0 nova_compute[189268]: 2025-11-22 08:47:55.645 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:47:55 compute-0 nova_compute[189268]: 2025-11-22 08:47:55.646 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.988s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:47:55 compute-0 nova_compute[189268]: 2025-11-22 08:47:55.998 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Acquiring lock "a04b24d5-3478-4e5f-bb51-abf299fa4459" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:47:55 compute-0 nova_compute[189268]: 2025-11-22 08:47:55.999 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lock "a04b24d5-3478-4e5f-bb51-abf299fa4459" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:47:56 compute-0 nova_compute[189268]: 2025-11-22 08:47:56.051 189273 DEBUG nova.compute.manager [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 08:47:56 compute-0 podman[250483]: 2025-11-22 08:47:56.117368037 +0000 UTC m=+0.067682815 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 22 08:47:56 compute-0 nova_compute[189268]: 2025-11-22 08:47:56.238 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:47:56 compute-0 nova_compute[189268]: 2025-11-22 08:47:56.239 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:47:56 compute-0 nova_compute[189268]: 2025-11-22 08:47:56.249 189273 DEBUG nova.virt.hardware [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 08:47:56 compute-0 nova_compute[189268]: 2025-11-22 08:47:56.250 189273 INFO nova.compute.claims [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Claim successful on node compute-0.ctlplane.example.com
Nov 22 08:47:56 compute-0 nova_compute[189268]: 2025-11-22 08:47:56.501 189273 DEBUG nova.compute.provider_tree [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:47:56 compute-0 nova_compute[189268]: 2025-11-22 08:47:56.519 189273 DEBUG nova.scheduler.client.report [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:47:56 compute-0 nova_compute[189268]: 2025-11-22 08:47:56.607 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.369s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:47:56 compute-0 nova_compute[189268]: 2025-11-22 08:47:56.608 189273 DEBUG nova.compute.manager [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 08:47:56 compute-0 nova_compute[189268]: 2025-11-22 08:47:56.725 189273 DEBUG nova.compute.manager [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 08:47:56 compute-0 nova_compute[189268]: 2025-11-22 08:47:56.725 189273 DEBUG nova.network.neutron [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 08:47:56 compute-0 nova_compute[189268]: 2025-11-22 08:47:56.782 189273 INFO nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 08:47:56 compute-0 nova_compute[189268]: 2025-11-22 08:47:56.799 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Acquiring lock "4414e066-bc1a-4a63-b3a0-5e88f0553032" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:47:56 compute-0 nova_compute[189268]: 2025-11-22 08:47:56.799 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:47:56 compute-0 nova_compute[189268]: 2025-11-22 08:47:56.821 189273 DEBUG nova.compute.manager [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 08:47:56 compute-0 nova_compute[189268]: 2025-11-22 08:47:56.846 189273 DEBUG nova.compute.manager [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.217 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.217 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.224 189273 DEBUG nova.virt.hardware [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.225 189273 INFO nova.compute.claims [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Claim successful on node compute-0.ctlplane.example.com
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.259 189273 DEBUG nova.compute.manager [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.260 189273 DEBUG nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.260 189273 INFO nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Creating image(s)
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.261 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Acquiring lock "/var/lib/nova/instances/a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.261 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lock "/var/lib/nova/instances/a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.262 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lock "/var/lib/nova/instances/a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.262 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Acquiring lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.263 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
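
[editor's note] The lock name "e3659e0d5dc4ae82934981faa7447edd23aca3ad" is, to the best of this note's knowledge, Nova's image-cache key: the libvirt image backend stores base images under the SHA-1 hex digest of the Glance image UUID and serializes concurrent downloads on that digest. Illustration only; the UUID below is hypothetical, the real image id does not appear in this log:

    import hashlib

    image_id = "00000000-0000-0000-0000-000000000000"  # hypothetical
    print(hashlib.sha1(image_id.encode("utf-8")).hexdigest())
    # -> lock name and cache filename under /var/lib/nova/instances/_base/
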
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.276 189273 DEBUG nova.policy [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5fe0ae1f27fc4a9ea04dde879cc50cba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '21dde3ab59bc4d5c890712c19e1b5ec8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
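
[editor's note] The failed check above is oslo.policy doing its job, not an error: the request's member/reader roles don't satisfy the rule for network:attach_external_network (assumed here to be the admin-only upstream default), so the instance simply cannot attach directly to an external network. A reduced sketch of the same evaluation:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'is_admin:True'))  # assumed default

    creds = {'roles': ['member', 'reader'], 'is_admin': False,
             'project_id': '21dde3ab59bc4d5c890712c19e1b5ec8'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))
    # False -> logged as "Policy check ... failed with credentials ..."
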
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.285 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.305 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Acquiring lock "81db0af1-e2c6-4f76-a043-9d51b0431db0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.306 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lock "81db0af1-e2c6-4f76-a043-9d51b0431db0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.437 189273 DEBUG nova.compute.manager [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.479 189273 DEBUG nova.compute.provider_tree [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.494 189273 DEBUG nova.scheduler.client.report [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
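The inventory dict in the line above fixes what the scheduler can place on this node: placement capacity per resource class is (total - reserved) * allocation_ratio. Checking the logged numbers with plain arithmetic:

    # Effective capacity implied by the inventory reported above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2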
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.670 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.671 189273 DEBUG nova.compute.manager [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.730 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.730 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.739 189273 DEBUG nova.virt.hardware [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.740 189273 INFO nova.compute.claims [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Claim successful on node compute-0.ctlplane.example.com
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.784 189273 DEBUG nova.compute.manager [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.785 189273 DEBUG nova.network.neutron [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.875 189273 INFO nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.933 189273 DEBUG nova.compute.manager [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.967 189273 DEBUG nova.compute.provider_tree [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:47:57 compute-0 nova_compute[189268]: 2025-11-22 08:47:57.985 189273 DEBUG nova.scheduler.client.report [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:47:58 compute-0 nova_compute[189268]: 2025-11-22 08:47:58.331 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:47:58 compute-0 nova_compute[189268]: 2025-11-22 08:47:58.333 189273 DEBUG nova.compute.manager [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 08:47:58 compute-0 nova_compute[189268]: 2025-11-22 08:47:58.579 189273 DEBUG nova.compute.manager [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 08:47:58 compute-0 nova_compute[189268]: 2025-11-22 08:47:58.581 189273 DEBUG nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 08:47:58 compute-0 nova_compute[189268]: 2025-11-22 08:47:58.581 189273 INFO nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Creating image(s)
Nov 22 08:47:58 compute-0 nova_compute[189268]: 2025-11-22 08:47:58.582 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Acquiring lock "/var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:47:58 compute-0 nova_compute[189268]: 2025-11-22 08:47:58.582 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "/var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:47:58 compute-0 nova_compute[189268]: 2025-11-22 08:47:58.583 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "/var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:47:58 compute-0 nova_compute[189268]: 2025-11-22 08:47:58.584 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Acquiring lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:47:58 compute-0 nova_compute[189268]: 2025-11-22 08:47:58.937 189273 DEBUG nova.compute.manager [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 08:47:58 compute-0 nova_compute[189268]: 2025-11-22 08:47:58.938 189273 DEBUG nova.network.neutron [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 08:47:59 compute-0 nova_compute[189268]: 2025-11-22 08:47:59.101 189273 INFO nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 08:47:59 compute-0 nova_compute[189268]: 2025-11-22 08:47:59.250 189273 DEBUG nova.compute.manager [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 08:47:59 compute-0 podman[203476]: time="2025-11-22T08:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:47:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:47:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4343 "" "Go-http-client/1.1"
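The two GETs above are a metrics collector hitting Podman's libpod REST API over its unix socket. A hedged way to reproduce the first query, assuming the requests-unixsocket package and the default root socket path (neither is confirmed by this log):

    # List containers via the libpod API, as the logged GET does.
    # ASSUMPTION: the service socket is /run/podman/podman.sock; rootless
    # deployments use a per-user path instead.
    import requests_unixsocket

    session = requests_unixsocket.Session()
    resp = session.get(
        'http+unix://%2Frun%2Fpodman%2Fpodman.sock'
        '/v4.9.3/libpod/containers/json?all=true')
    print(resp.status_code, len(resp.json()), 'containers')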
Nov 22 08:47:59 compute-0 nova_compute[189268]: 2025-11-22 08:47:59.945 189273 DEBUG nova.compute.manager [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 08:47:59 compute-0 nova_compute[189268]: 2025-11-22 08:47:59.946 189273 DEBUG nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 08:47:59 compute-0 nova_compute[189268]: 2025-11-22 08:47:59.947 189273 INFO nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Creating image(s)
Nov 22 08:47:59 compute-0 nova_compute[189268]: 2025-11-22 08:47:59.947 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Acquiring lock "/var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:47:59 compute-0 nova_compute[189268]: 2025-11-22 08:47:59.948 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lock "/var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:47:59 compute-0 nova_compute[189268]: 2025-11-22 08:47:59.948 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lock "/var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:47:59 compute-0 nova_compute[189268]: 2025-11-22 08:47:59.949 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Acquiring lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:47:59 compute-0 nova_compute[189268]: 2025-11-22 08:47:59.995 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:00 compute-0 nova_compute[189268]: 2025-11-22 08:48:00.263 189273 DEBUG nova.policy [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '16843c91d66144f880a31734be4d3dee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8de05c82cd5c4f7bbe156c45495011c2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 08:48:00 compute-0 nova_compute[189268]: 2025-11-22 08:48:00.306 189273 DEBUG oslo_concurrency.processutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:00 compute-0 nova_compute[189268]: 2025-11-22 08:48:00.327 189273 DEBUG nova.policy [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd19b7a27c3e74d08af788a67b85247fc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3503f7b171c4187acaf1d66e260df45', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
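The "Policy check for network:attach_external_network failed" lines (here and at 08:47:57 above) are oslo.policy denials: the callers hold only the member and reader roles, and attaching external networks is an admin-only rule in stock Nova, so each build simply proceeds without that capability. A minimal oslo.policy sketch of the same evaluation; the check string below is the upstream default as best recalled, so verify it against your deployment's policy files:

    # Evaluate the logged rule against the logged (non-admin) credentials.
    # ASSUMPTION: 'is_admin:True' is the default check for this rule.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'is_admin:True'))
    creds = {'is_admin': False, 'roles': ['member', 'reader'],
             'project_id': 'a3503f7b171c4187acaf1d66e260df45'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))
    # False, logged by Nova as "Policy check ... failed"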
Nov 22 08:48:00 compute-0 nova_compute[189268]: 2025-11-22 08:48:00.372 189273 DEBUG oslo_concurrency.processutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad.part --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:00 compute-0 nova_compute[189268]: 2025-11-22 08:48:00.373 189273 DEBUG nova.virt.images [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 22 08:48:00 compute-0 nova_compute[189268]: 2025-11-22 08:48:00.388 189273 DEBUG nova.privsep.utils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 22 08:48:00 compute-0 nova_compute[189268]: 2025-11-22 08:48:00.389 189273 DEBUG oslo_concurrency.processutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad.part /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
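The qemu-img info call above runs under "python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30", i.e. processutils with resource limits, and the convert that follows rewrites the fetched qcow2 into the raw base image. A sketch of both calls through oslo.concurrency (paths are the ones in this log; the rest is standard processutils usage):

    # Inspect the fetched image under the same address-space/CPU caps as
    # the log shows, then convert qcow2 -> raw for the _base cache.
    from oslo_concurrency import processutils

    base = '/var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad'
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        base + '.part', '--force-share', '--output=json', prlimit=limits)
    processutils.execute(
        'qemu-img', 'convert', '-t', 'none', '-O', 'raw', '-f', 'qcow2',
        base + '.part', base + '.converted')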
Nov 22 08:48:01 compute-0 openstack_network_exporter[205661]: ERROR   08:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:48:01 compute-0 openstack_network_exporter[205661]: ERROR   08:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:48:01 compute-0 openstack_network_exporter[205661]: ERROR   08:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:48:01 compute-0 openstack_network_exporter[205661]: ERROR   08:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Nov 22 08:48:01 compute-0 openstack_network_exporter[205661]: ERROR   08:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:48:01 compute-0 nova_compute[189268]: 2025-11-22 08:48:01.820 189273 DEBUG oslo_concurrency.processutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad.part /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad.converted" returned: 0 in 1.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:01 compute-0 nova_compute[189268]: 2025-11-22 08:48:01.825 189273 DEBUG oslo_concurrency.processutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:01 compute-0 nova_compute[189268]: 2025-11-22 08:48:01.888 189273 DEBUG oslo_concurrency.processutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad.converted --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:01 compute-0 nova_compute[189268]: 2025-11-22 08:48:01.889 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:01 compute-0 nova_compute[189268]: 2025-11-22 08:48:01.904 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 3.320s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:01 compute-0 nova_compute[189268]: 2025-11-22 08:48:01.904 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:01 compute-0 nova_compute[189268]: 2025-11-22 08:48:01.918 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 1.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:01 compute-0 nova_compute[189268]: 2025-11-22 08:48:01.918 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
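Note how the two later requests above each held the lock for only 0.001s: req-62747cca spent 4.627s fetching and converting the base image, and by the time the others acquired the same lock the cached base already existed, so their fetch was a no-op. The lock and _base file name is derived from the Glance image ID seen in the earlier fetch line (ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc); Nova's libvirt image cache historically names _base files by the SHA-1 hex digest of the image ID. A sketch of that derivation, offered as illustration rather than verified output:

    # Derive the _base cache key from the image ID, as the matching lock
    # and file names in this log suggest (SHA-1 hex of the image ID).
    import hashlib

    image_id = 'ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc'  # from the fetch line
    print(hashlib.sha1(image_id.encode('utf-8')).hexdigest())
    # expected: e3659e0d5dc4ae82934981faa7447edd23aca3ad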
Nov 22 08:48:01 compute-0 nova_compute[189268]: 2025-11-22 08:48:01.933 189273 DEBUG oslo_concurrency.processutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:01 compute-0 nova_compute[189268]: 2025-11-22 08:48:01.949 189273 DEBUG oslo_concurrency.processutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:01 compute-0 nova_compute[189268]: 2025-11-22 08:48:01.967 189273 DEBUG oslo_concurrency.processutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:01 compute-0 nova_compute[189268]: 2025-11-22 08:48:01.992 189273 DEBUG oslo_concurrency.processutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:01 compute-0 nova_compute[189268]: 2025-11-22 08:48:01.993 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Acquiring lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:01 compute-0 nova_compute[189268]: 2025-11-22 08:48:01.994 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.006 189273 DEBUG oslo_concurrency.processutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.023 189273 DEBUG oslo_concurrency.processutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.024 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Acquiring lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.030 189273 DEBUG oslo_concurrency.processutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.030 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Acquiring lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.047 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Acquiring lock "9f91d44e-f61c-44ca-b623-140121eb8965" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.048 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lock "9f91d44e-f61c-44ca-b623-140121eb8965" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.063 189273 DEBUG nova.network.neutron [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Successfully created port: fbd5a3ad-e519-4a3f-ab67-99a00166bd4c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.067 189273 DEBUG oslo_concurrency.processutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.068 189273 DEBUG oslo_concurrency.processutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad,backing_fmt=raw /var/lib/nova/instances/a04b24d5-3478-4e5f-bb51-abf299fa4459/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.087 189273 DEBUG nova.compute.manager [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.138 189273 DEBUG oslo_concurrency.processutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad,backing_fmt=raw /var/lib/nova/instances/a04b24d5-3478-4e5f-bb51-abf299fa4459/disk 1073741824" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
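The create command above is the copy-on-write step: each per-instance disk is a qcow2 overlay whose backing file is the shared raw base, sized 1073741824 bytes (1 GiB). The pair of commands can be reproduced outside Nova, with illustrative paths:

    # Make a raw base, then a 1 GiB qcow2 overlay backed by it, as logged.
    import subprocess

    base, overlay = '/tmp/base.raw', '/tmp/disk.qcow2'
    subprocess.check_call(['qemu-img', 'create', '-f', 'raw', base, '1G'])
    subprocess.check_call([
        'qemu-img', 'create', '-f', 'qcow2',
        '-o', f'backing_file={base},backing_fmt=raw', overlay, '1073741824'])
    subprocess.check_call(['qemu-img', 'info', overlay])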
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.140 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.141 189273 DEBUG oslo_concurrency.processutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.158 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.174 189273 DEBUG oslo_concurrency.processutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.206 189273 DEBUG oslo_concurrency.processutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.208 189273 DEBUG nova.virt.disk.api [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Checking if we can resize image /var/lib/nova/instances/a04b24d5-3478-4e5f-bb51-abf299fa4459/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.208 189273 DEBUG oslo_concurrency.processutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a04b24d5-3478-4e5f-bb51-abf299fa4459/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.227 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.227 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.235 189273 DEBUG oslo_concurrency.processutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.236 189273 DEBUG oslo_concurrency.processutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad,backing_fmt=raw /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.254 189273 DEBUG nova.virt.hardware [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.255 189273 INFO nova.compute.claims [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Claim successful on node compute-0.ctlplane.example.com
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.270 189273 DEBUG oslo_concurrency.processutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a04b24d5-3478-4e5f-bb51-abf299fa4459/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.271 189273 DEBUG nova.virt.disk.api [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Cannot resize image /var/lib/nova/instances/a04b24d5-3478-4e5f-bb51-abf299fa4459/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
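The "Cannot resize image ... to a smaller size" line is the grow-only guard: the requested size (1073741824 bytes, the same 1 GiB the overlay was just created with) is compared against the disk's current virtual size from qemu-img info, and anything not strictly larger is refused, so the resize is skipped as a no-op. A standalone sketch of that comparison (plain qemu-img plus json, not Nova's actual helper):

    # Grow-only check in the spirit of nova.virt.disk.api.can_resize_image.
    import json
    import subprocess

    def can_resize_image(path, new_size):
        info = json.loads(subprocess.check_output(
            ['qemu-img', 'info', '--force-share', '--output=json', path]))
        return new_size > info['virtual-size']  # shrink or no-op is refused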
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.271 189273 DEBUG nova.objects.instance [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lazy-loading 'migration_context' on Instance uuid a04b24d5-3478-4e5f-bb51-abf299fa4459 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.282 189273 DEBUG nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.282 189273 DEBUG nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Ensure instance console log exists: /var/lib/nova/instances/a04b24d5-3478-4e5f-bb51-abf299fa4459/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.283 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.284 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.284 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.287 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.391 189273 DEBUG oslo_concurrency.processutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad,backing_fmt=raw /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk 1073741824" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.392 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.392 189273 DEBUG oslo_concurrency.processutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.419 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.389s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.434 189273 DEBUG oslo_concurrency.processutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.460 189273 DEBUG oslo_concurrency.processutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.461 189273 DEBUG nova.virt.disk.api [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Checking if we can resize image /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.462 189273 DEBUG oslo_concurrency.processutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.497 189273 DEBUG oslo_concurrency.processutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.498 189273 DEBUG oslo_concurrency.processutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad,backing_fmt=raw /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.521 189273 DEBUG oslo_concurrency.processutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.522 189273 DEBUG nova.virt.disk.api [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Cannot resize image /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.522 189273 DEBUG nova.objects.instance [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lazy-loading 'migration_context' on Instance uuid 4414e066-bc1a-4a63-b3a0-5e88f0553032 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.525 189273 DEBUG nova.compute.provider_tree [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.542 189273 DEBUG nova.scheduler.client.report [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.545 189273 DEBUG nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.546 189273 DEBUG nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Ensure instance console log exists: /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.546 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.547 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.547 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.624 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.396s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.625 189273 DEBUG nova.compute.manager [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.709 189273 DEBUG nova.compute.manager [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.710 189273 DEBUG nova.network.neutron [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.721 189273 DEBUG oslo_concurrency.processutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad,backing_fmt=raw /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk 1073741824" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.724 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.725 189273 DEBUG oslo_concurrency.processutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.767 189273 INFO nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.789 189273 DEBUG oslo_concurrency.processutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
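Both qemu-img calls above are issued through oslo.concurrency's processutils; the read-only `qemu-img info` is additionally wrapped in `python3 -m oslo_concurrency.prlimit` so a malformed image cannot exhaust memory or CPU while being probed (the --as/--cpu limits visible in the command line). A rough equivalent of the two invocations, with the commands copied from the log:

    from oslo_concurrency import processutils

    base = '/var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad'

    # Create a 1 GiB qcow2 overlay backed by the cached base image.
    processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'create', '-f', 'qcow2',
        '-o', 'backing_file=%s,backing_fmt=raw' % base,
        '/var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk',
        '1073741824')

    # Probe the base image under resource limits (1 GiB address space,
    # 30 s CPU) -- this is what produces the prlimit wrapper in the log.
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info', base,
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))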
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.789 189273 DEBUG nova.virt.disk.api [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Checking if we can resize image /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.790 189273 DEBUG oslo_concurrency.processutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.807 189273 DEBUG nova.compute.manager [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.857 189273 DEBUG oslo_concurrency.processutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.858 189273 DEBUG nova.virt.disk.api [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Cannot resize image /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
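The "Cannot resize image ... to a smaller size" line is informational, not an error: the flavor's 1 GiB root disk is not larger than the overlay's current virtual size, so nova skips the resize. The check amounts to comparing the requested size against the virtual-size field of the qemu-img JSON output; a sketch under that assumption:

    import json
    from oslo_concurrency import processutils

    def can_resize_image(path, size):
        # qemu-img reports virtual-size in bytes in its JSON output
        out, _err = processutils.execute(
            'qemu-img', 'info', path, '--force-share', '--output=json')
        virtual_size = json.loads(out)['virtual-size']
        # Growing an image is safe; shrinking one is refused.
        return size > virtual_size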
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.858 189273 DEBUG nova.objects.instance [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lazy-loading 'migration_context' on Instance uuid 81db0af1-e2c6-4f76-a043-9d51b0431db0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.882 189273 DEBUG nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.882 189273 DEBUG nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Ensure instance console log exists: /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.883 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.883 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:02 compute-0 nova_compute[189268]: 2025-11-22 08:48:02.884 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.091 189273 DEBUG nova.compute.manager [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.093 189273 DEBUG nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.094 189273 INFO nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Creating image(s)
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.094 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Acquiring lock "/var/lib/nova/instances/9f91d44e-f61c-44ca-b623-140121eb8965/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.095 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lock "/var/lib/nova/instances/9f91d44e-f61c-44ca-b623-140121eb8965/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.096 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lock "/var/lib/nova/instances/9f91d44e-f61c-44ca-b623-140121eb8965/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.110 189273 DEBUG oslo_concurrency.processutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.169 189273 DEBUG oslo_concurrency.processutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.170 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Acquiring lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.171 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.183 189273 DEBUG oslo_concurrency.processutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.242 189273 DEBUG oslo_concurrency.processutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.243 189273 DEBUG oslo_concurrency.processutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad,backing_fmt=raw /var/lib/nova/instances/9f91d44e-f61c-44ca-b623-140121eb8965/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.353 189273 DEBUG nova.policy [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd65f035f2b1b49319ad0f75cf17d724a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '545684c5a33d4873a3184e54d562685f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
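The failed policy check is expected for an unprivileged tenant: nova evaluates `network:attach_external_network` with oslo.policy against the credentials shown (roles member/reader, is_admin False) before allowing a port on an external network. A minimal sketch of such an enforcement call; the rule registration here is hypothetical, nova's real default lives in its in-tree policy definitions:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # Hypothetical default: only admins may attach to external networks.
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))

    creds = {'user_id': 'd65f035f2b1b49319ad0f75cf17d724a',
             'project_id': '545684c5a33d4873a3184e54d562685f',
             'roles': ['member', 'reader']}
    # Returns False for these credentials, matching the DEBUG line above.
    allowed = enforcer.enforce('network:attach_external_network', {}, creds)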
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.548 189273 DEBUG oslo_concurrency.processutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad,backing_fmt=raw /var/lib/nova/instances/9f91d44e-f61c-44ca-b623-140121eb8965/disk 1073741824" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.549 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.378s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.549 189273 DEBUG oslo_concurrency.processutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.612 189273 DEBUG oslo_concurrency.processutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.613 189273 DEBUG nova.virt.disk.api [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Checking if we can resize image /var/lib/nova/instances/9f91d44e-f61c-44ca-b623-140121eb8965/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.615 189273 DEBUG oslo_concurrency.processutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f91d44e-f61c-44ca-b623-140121eb8965/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.673 189273 DEBUG oslo_concurrency.processutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f91d44e-f61c-44ca-b623-140121eb8965/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.675 189273 DEBUG nova.virt.disk.api [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Cannot resize image /var/lib/nova/instances/9f91d44e-f61c-44ca-b623-140121eb8965/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.675 189273 DEBUG nova.objects.instance [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lazy-loading 'migration_context' on Instance uuid 9f91d44e-f61c-44ca-b623-140121eb8965 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.690 189273 DEBUG nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.691 189273 DEBUG nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Ensure instance console log exists: /var/lib/nova/instances/9f91d44e-f61c-44ca-b623-140121eb8965/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.692 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.692 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.693 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:03 compute-0 nova_compute[189268]: 2025-11-22 08:48:03.835 189273 DEBUG nova.network.neutron [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Successfully created port: 3f5ad619-9cef-49b4-b0fd-8243d3506e32 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 08:48:04 compute-0 nova_compute[189268]: 2025-11-22 08:48:04.404 189273 DEBUG nova.network.neutron [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Successfully created port: 5646e04c-958a-4629-b420-730d4967f183 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 08:48:04 compute-0 nova_compute[189268]: 2025-11-22 08:48:04.569 189273 DEBUG nova.network.neutron [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Successfully updated port: fbd5a3ad-e519-4a3f-ab67-99a00166bd4c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 08:48:04 compute-0 nova_compute[189268]: 2025-11-22 08:48:04.634 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Acquiring lock "refresh_cache-a04b24d5-3478-4e5f-bb51-abf299fa4459" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:48:04 compute-0 nova_compute[189268]: 2025-11-22 08:48:04.635 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Acquired lock "refresh_cache-a04b24d5-3478-4e5f-bb51-abf299fa4459" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:48:04 compute-0 nova_compute[189268]: 2025-11-22 08:48:04.635 189273 DEBUG nova.network.neutron [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 08:48:04 compute-0 nova_compute[189268]: 2025-11-22 08:48:04.840 189273 DEBUG nova.network.neutron [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 08:48:04 compute-0 nova_compute[189268]: 2025-11-22 08:48:04.998 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:05 compute-0 podman[250581]: 2025-11-22 08:48:05.110165212 +0000 UTC m=+0.065096158 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 22 08:48:05 compute-0 podman[250580]: 2025-11-22 08:48:05.117563228 +0000 UTC m=+0.075712349 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 22 08:48:05 compute-0 podman[250582]: 2025-11-22 08:48:05.12027759 +0000 UTC m=+0.071174859 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.108 189273 DEBUG nova.network.neutron [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Updating instance_info_cache with network_info: [{"id": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "address": "fa:16:3e:3c:b1:72", "network": {"id": "c1d6d43d-5b47-494d-a955-bb769150c95d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-890547167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21dde3ab59bc4d5c890712c19e1b5ec8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbd5a3ad-e5", "ovs_interfaceid": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.290 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.326 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Releasing lock "refresh_cache-a04b24d5-3478-4e5f-bb51-abf299fa4459" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.327 189273 DEBUG nova.compute.manager [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Instance network_info: |[{"id": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "address": "fa:16:3e:3c:b1:72", "network": {"id": "c1d6d43d-5b47-494d-a955-bb769150c95d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-890547167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21dde3ab59bc4d5c890712c19e1b5ec8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbd5a3ad-e5", "ovs_interfaceid": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.330 189273 DEBUG nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Start _get_guest_xml network_info=[{"id": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "address": "fa:16:3e:3c:b1:72", "network": {"id": "c1d6d43d-5b47-494d-a955-bb769150c95d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-890547167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21dde3ab59bc4d5c890712c19e1b5ec8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbd5a3ad-e5", "ovs_interfaceid": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T08:46:32Z,direct_url=<?>,disk_format='qcow2',id=ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T08:46:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.338 189273 WARNING nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.345 189273 DEBUG nova.virt.libvirt.host [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.345 189273 DEBUG nova.virt.libvirt.host [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.350 189273 DEBUG nova.virt.libvirt.host [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.351 189273 DEBUG nova.virt.libvirt.host [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
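The two probes above reflect the split between cgroups v1 and v2: on a v2 (unified) host like this one, available controllers are listed in a single cgroup.controllers file rather than as separate mount points, which is why the v1 probe misses and the v2 probe hits. A sketch of the v2 side of the check, assuming the standard /sys/fs/cgroup mount:

    def has_cgroupsv2_cpu_controller():
        # On cgroups v2 the unified hierarchy lists enabled controllers in
        # one space-separated file; "cpu" present means the controller the
        # driver is looking for is available.
        try:
            with open('/sys/fs/cgroup/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False   # no unified hierarchy -> cgroups v1 host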
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.351 189273 DEBUG nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.351 189273 DEBUG nova.virt.hardware [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T08:46:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='60cc47c3-347f-4964-bb52-9bef8d0548a9',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T08:46:32Z,direct_url=<?>,disk_format='qcow2',id=ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T08:46:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.352 189273 DEBUG nova.virt.hardware [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.353 189273 DEBUG nova.virt.hardware [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.353 189273 DEBUG nova.virt.hardware [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.353 189273 DEBUG nova.virt.hardware [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.354 189273 DEBUG nova.virt.hardware [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.354 189273 DEBUG nova.virt.hardware [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.354 189273 DEBUG nova.virt.hardware [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.355 189273 DEBUG nova.virt.hardware [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.355 189273 DEBUG nova.virt.hardware [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.356 189273 DEBUG nova.virt.hardware [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
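The topology lines trace nova's selection funnel: flavor and image set no constraints (all 0, meaning unset), the limits default to 65536 per dimension, and for a single vCPU the only factorization is 1 socket x 1 core x 1 thread. A toy reconstruction of the enumeration step (nova's real logic lives in nova.virt.hardware):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate every (sockets, cores, threads) triple whose product
        # equals the vCPU count and respects the per-dimension maxima.
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))   # [(1, 1, 1)], as the log reports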
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.360 189273 DEBUG nova.virt.libvirt.vif [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:47:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1220793961',display_name='tempest-ServersTestManualDisk-server-1220793961',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1220793961',id=7,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMHSXyv5/0Hlx4i0OgKhjpEjPmuanREUsyUDnTJ7rFrTHFiHEnaLMIfwHDH01Ks8d9pDlbN2I8RDvKuUXlCzQJWqREG2cSupdPUUp/0yrCSVVH27nlxpF76AAlKTR9RoYA==',key_name='tempest-keypair-884669752',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='21dde3ab59bc4d5c890712c19e1b5ec8',ramdisk_id='',reservation_id='r-cfysm7ui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1224175633',owner_user_name='tempest-ServersTestManualDisk-1224175633-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:47:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5fe0ae1f27fc4a9ea04dde879cc50cba',uuid=a04b24d5-3478-4e5f-bb51-abf299fa4459,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "address": "fa:16:3e:3c:b1:72", "network": {"id": "c1d6d43d-5b47-494d-a955-bb769150c95d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-890547167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21dde3ab59bc4d5c890712c19e1b5ec8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbd5a3ad-e5", "ovs_interfaceid": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.360 189273 DEBUG nova.network.os_vif_util [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Converting VIF {"id": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "address": "fa:16:3e:3c:b1:72", "network": {"id": "c1d6d43d-5b47-494d-a955-bb769150c95d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-890547167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21dde3ab59bc4d5c890712c19e1b5ec8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbd5a3ad-e5", "ovs_interfaceid": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.361 189273 DEBUG nova.network.os_vif_util [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:b1:72,bridge_name='br-int',has_traffic_filtering=True,id=fbd5a3ad-e519-4a3f-ab67-99a00166bd4c,network=Network(c1d6d43d-5b47-494d-a955-bb769150c95d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbd5a3ad-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
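The conversion step above turns nova's network_info dict into a typed os-vif object that the ovs plugin can plug. Recreating the converted object by hand, with the field values copied from the log line (the Network object is reduced to its UUID here for brevity):

    from os_vif.objects import network as osv_network
    from os_vif.objects import vif as osv_vif

    # Mirrors the VIFOpenVSwitch repr printed in the log above.
    vif = osv_vif.VIFOpenVSwitch(
        id='fbd5a3ad-e519-4a3f-ab67-99a00166bd4c',
        address='fa:16:3e:3c:b1:72',
        bridge_name='br-int',
        has_traffic_filtering=True,
        network=osv_network.Network(id='c1d6d43d-5b47-494d-a955-bb769150c95d'),
        plugin='ovs',
        preserve_on_delete=False,
        vif_name='tapfbd5a3ad-e5')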
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.362 189273 DEBUG nova.objects.instance [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lazy-loading 'pci_devices' on Instance uuid a04b24d5-3478-4e5f-bb51-abf299fa4459 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.377 189273 DEBUG nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] End _get_guest_xml xml=<domain type="kvm">
Nov 22 08:48:07 compute-0 nova_compute[189268]:   <uuid>a04b24d5-3478-4e5f-bb51-abf299fa4459</uuid>
Nov 22 08:48:07 compute-0 nova_compute[189268]:   <name>instance-00000007</name>
Nov 22 08:48:07 compute-0 nova_compute[189268]:   <memory>131072</memory>
Nov 22 08:48:07 compute-0 nova_compute[189268]:   <vcpu>1</vcpu>
Nov 22 08:48:07 compute-0 nova_compute[189268]:   <metadata>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <nova:name>tempest-ServersTestManualDisk-server-1220793961</nova:name>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <nova:creationTime>2025-11-22 08:48:07</nova:creationTime>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <nova:flavor name="m1.nano">
Nov 22 08:48:07 compute-0 nova_compute[189268]:         <nova:memory>128</nova:memory>
Nov 22 08:48:07 compute-0 nova_compute[189268]:         <nova:disk>1</nova:disk>
Nov 22 08:48:07 compute-0 nova_compute[189268]:         <nova:swap>0</nova:swap>
Nov 22 08:48:07 compute-0 nova_compute[189268]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 08:48:07 compute-0 nova_compute[189268]:         <nova:vcpus>1</nova:vcpus>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       </nova:flavor>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <nova:owner>
Nov 22 08:48:07 compute-0 nova_compute[189268]:         <nova:user uuid="5fe0ae1f27fc4a9ea04dde879cc50cba">tempest-ServersTestManualDisk-1224175633-project-member</nova:user>
Nov 22 08:48:07 compute-0 nova_compute[189268]:         <nova:project uuid="21dde3ab59bc4d5c890712c19e1b5ec8">tempest-ServersTestManualDisk-1224175633</nova:project>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       </nova:owner>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <nova:root type="image" uuid="ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <nova:ports>
Nov 22 08:48:07 compute-0 nova_compute[189268]:         <nova:port uuid="fbd5a3ad-e519-4a3f-ab67-99a00166bd4c">
Nov 22 08:48:07 compute-0 nova_compute[189268]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:         </nova:port>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       </nova:ports>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     </nova:instance>
Nov 22 08:48:07 compute-0 nova_compute[189268]:   </metadata>
Nov 22 08:48:07 compute-0 nova_compute[189268]:   <sysinfo type="smbios">
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <system>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <entry name="manufacturer">RDO</entry>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <entry name="product">OpenStack Compute</entry>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <entry name="serial">a04b24d5-3478-4e5f-bb51-abf299fa4459</entry>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <entry name="uuid">a04b24d5-3478-4e5f-bb51-abf299fa4459</entry>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <entry name="family">Virtual Machine</entry>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     </system>
Nov 22 08:48:07 compute-0 nova_compute[189268]:   </sysinfo>
Nov 22 08:48:07 compute-0 nova_compute[189268]:   <os>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <boot dev="hd"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <smbios mode="sysinfo"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:   </os>
Nov 22 08:48:07 compute-0 nova_compute[189268]:   <features>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <acpi/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <apic/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <vmcoreinfo/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:   </features>
Nov 22 08:48:07 compute-0 nova_compute[189268]:   <clock offset="utc">
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <timer name="hpet" present="no"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:   </clock>
Nov 22 08:48:07 compute-0 nova_compute[189268]:   <cpu mode="host-model" match="exact">
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:48:07 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/a04b24d5-3478-4e5f-bb51-abf299fa4459/disk"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <target dev="vda" bus="virtio"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <disk type="file" device="cdrom">
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <driver name="qemu" type="raw" cache="none"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.config"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <target dev="sda" bus="sata"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <interface type="ethernet">
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <mac address="fa:16:3e:3c:b1:72"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <mtu size="1442"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <target dev="tapfbd5a3ad-e5"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     </interface>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <serial type="pty">
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <log file="/var/lib/nova/instances/a04b24d5-3478-4e5f-bb51-abf299fa4459/console.log" append="off"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     </serial>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <video>
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     </video>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <input type="tablet" bus="usb"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <rng model="virtio">
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <backend model="random">/dev/urandom</backend>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <controller type="usb" index="0"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     <memballoon model="virtio">
Nov 22 08:48:07 compute-0 nova_compute[189268]:       <stats period="10"/>
Nov 22 08:48:07 compute-0 nova_compute[189268]:     </memballoon>
Nov 22 08:48:07 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:48:07 compute-0 nova_compute[189268]: </domain>
Nov 22 08:48:07 compute-0 nova_compute[189268]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.379 189273 DEBUG nova.compute.manager [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Preparing to wait for external event network-vif-plugged-fbd5a3ad-e519-4a3f-ab67-99a00166bd4c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.379 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Acquiring lock "a04b24d5-3478-4e5f-bb51-abf299fa4459-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.380 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lock "a04b24d5-3478-4e5f-bb51-abf299fa4459-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.380 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lock "a04b24d5-3478-4e5f-bb51-abf299fa4459-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.381 189273 DEBUG nova.virt.libvirt.vif [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:47:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1220793961',display_name='tempest-ServersTestManualDisk-server-1220793961',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1220793961',id=7,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMHSXyv5/0Hlx4i0OgKhjpEjPmuanREUsyUDnTJ7rFrTHFiHEnaLMIfwHDH01Ks8d9pDlbN2I8RDvKuUXlCzQJWqREG2cSupdPUUp/0yrCSVVH27nlxpF76AAlKTR9RoYA==',key_name='tempest-keypair-884669752',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='21dde3ab59bc4d5c890712c19e1b5ec8',ramdisk_id='',reservation_id='r-cfysm7ui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1224175633',owner_user_name='tempest-ServersTestManualDisk-1224175633-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:47:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5fe0ae1f27fc4a9ea04dde879cc50cba',uuid=a04b24d5-3478-4e5f-bb51-abf299fa4459,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "address": "fa:16:3e:3c:b1:72", "network": {"id": "c1d6d43d-5b47-494d-a955-bb769150c95d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-890547167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21dde3ab59bc4d5c890712c19e1b5ec8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbd5a3ad-e5", "ovs_interfaceid": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.381 189273 DEBUG nova.network.os_vif_util [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Converting VIF {"id": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "address": "fa:16:3e:3c:b1:72", "network": {"id": "c1d6d43d-5b47-494d-a955-bb769150c95d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-890547167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21dde3ab59bc4d5c890712c19e1b5ec8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbd5a3ad-e5", "ovs_interfaceid": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.382 189273 DEBUG nova.network.os_vif_util [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:b1:72,bridge_name='br-int',has_traffic_filtering=True,id=fbd5a3ad-e519-4a3f-ab67-99a00166bd4c,network=Network(c1d6d43d-5b47-494d-a955-bb769150c95d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbd5a3ad-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.382 189273 DEBUG os_vif [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:b1:72,bridge_name='br-int',has_traffic_filtering=True,id=fbd5a3ad-e519-4a3f-ab67-99a00166bd4c,network=Network(c1d6d43d-5b47-494d-a955-bb769150c95d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbd5a3ad-e5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.383 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.383 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.383 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.387 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.387 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbd5a3ad-e5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.388 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfbd5a3ad-e5, col_values=(('external_ids', {'iface-id': 'fbd5a3ad-e519-4a3f-ab67-99a00166bd4c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3c:b1:72', 'vm-uuid': 'a04b24d5-3478-4e5f-bb51-abf299fa4459'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.390 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.391 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:48:07 compute-0 NetworkManager[56326]: <info>  [1763801287.3922] manager: (tapfbd5a3ad-e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.398 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.399 189273 INFO os_vif [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:b1:72,bridge_name='br-int',has_traffic_filtering=True,id=fbd5a3ad-e519-4a3f-ab67-99a00166bd4c,network=Network(c1d6d43d-5b47-494d-a955-bb769150c95d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbd5a3ad-e5')
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.466 189273 DEBUG nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.467 189273 DEBUG nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.468 189273 DEBUG nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] No VIF found with MAC fa:16:3e:3c:b1:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.469 189273 INFO nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Using config drive
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.583 189273 DEBUG nova.network.neutron [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Successfully created port: 363e6818-f5a5-4baa-87a9-7526c518ae95 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.607 189273 DEBUG nova.compute.manager [req-087e4188-4b8d-4d7b-96de-993940a62cf4 req-0fe50792-c907-41fd-b340-64fcb153d164 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Received event network-changed-fbd5a3ad-e519-4a3f-ab67-99a00166bd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.608 189273 DEBUG nova.compute.manager [req-087e4188-4b8d-4d7b-96de-993940a62cf4 req-0fe50792-c907-41fd-b340-64fcb153d164 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Refreshing instance network info cache due to event network-changed-fbd5a3ad-e519-4a3f-ab67-99a00166bd4c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.608 189273 DEBUG oslo_concurrency.lockutils [req-087e4188-4b8d-4d7b-96de-993940a62cf4 req-0fe50792-c907-41fd-b340-64fcb153d164 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-a04b24d5-3478-4e5f-bb51-abf299fa4459" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.608 189273 DEBUG oslo_concurrency.lockutils [req-087e4188-4b8d-4d7b-96de-993940a62cf4 req-0fe50792-c907-41fd-b340-64fcb153d164 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-a04b24d5-3478-4e5f-bb51-abf299fa4459" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:48:07 compute-0 nova_compute[189268]: 2025-11-22 08:48:07.609 189273 DEBUG nova.network.neutron [req-087e4188-4b8d-4d7b-96de-993940a62cf4 req-0fe50792-c907-41fd-b340-64fcb153d164 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Refreshing network info cache for port fbd5a3ad-e519-4a3f-ab67-99a00166bd4c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:48:08 compute-0 nova_compute[189268]: 2025-11-22 08:48:08.546 189273 INFO nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Creating config drive at /var/lib/nova/instances/a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.config
Nov 22 08:48:08 compute-0 nova_compute[189268]: 2025-11-22 08:48:08.555 189273 DEBUG oslo_concurrency.processutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp9ysyu4d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:08 compute-0 nova_compute[189268]: 2025-11-22 08:48:08.687 189273 DEBUG oslo_concurrency.processutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp9ysyu4d" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:08 compute-0 kernel: tapfbd5a3ad-e5: entered promiscuous mode
Nov 22 08:48:08 compute-0 NetworkManager[56326]: <info>  [1763801288.7709] manager: (tapfbd5a3ad-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Nov 22 08:48:08 compute-0 ovn_controller[97783]: 2025-11-22T08:48:08Z|00073|binding|INFO|Claiming lport fbd5a3ad-e519-4a3f-ab67-99a00166bd4c for this chassis.
Nov 22 08:48:08 compute-0 nova_compute[189268]: 2025-11-22 08:48:08.775 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:08 compute-0 ovn_controller[97783]: 2025-11-22T08:48:08Z|00074|binding|INFO|fbd5a3ad-e519-4a3f-ab67-99a00166bd4c: Claiming fa:16:3e:3c:b1:72 10.100.0.4
Nov 22 08:48:08 compute-0 ovn_controller[97783]: 2025-11-22T08:48:08Z|00075|binding|INFO|Setting lport fbd5a3ad-e519-4a3f-ab67-99a00166bd4c ovn-installed in OVS
Nov 22 08:48:08 compute-0 nova_compute[189268]: 2025-11-22 08:48:08.807 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:08 compute-0 nova_compute[189268]: 2025-11-22 08:48:08.809 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:08 compute-0 ovn_controller[97783]: 2025-11-22T08:48:08Z|00076|binding|INFO|Setting lport fbd5a3ad-e519-4a3f-ab67-99a00166bd4c up in Southbound
Nov 22 08:48:08 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:08.810 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:b1:72 10.100.0.4'], port_security=['fa:16:3e:3c:b1:72 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a04b24d5-3478-4e5f-bb51-abf299fa4459', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1d6d43d-5b47-494d-a955-bb769150c95d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21dde3ab59bc4d5c890712c19e1b5ec8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '025c2d69-95c4-4db4-b22f-bb23cfb7a649', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed50ec9b-74d3-4ca4-8425-6eb8a7e767c0, chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=fbd5a3ad-e519-4a3f-ab67-99a00166bd4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:48:08 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:08.812 106642 INFO neutron.agent.ovn.metadata.agent [-] Port fbd5a3ad-e519-4a3f-ab67-99a00166bd4c in datapath c1d6d43d-5b47-494d-a955-bb769150c95d bound to our chassis
Nov 22 08:48:08 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:08.814 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c1d6d43d-5b47-494d-a955-bb769150c95d
Nov 22 08:48:08 compute-0 systemd-udevd[250660]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:48:08 compute-0 systemd-machined[155703]: New machine qemu-7-instance-00000007.
Nov 22 08:48:08 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:08.831 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[daa6ef38-4679-4615-8a27-ce8ea92ebd58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:08 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:08.833 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc1d6d43d-51 in ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 08:48:08 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:08.837 239666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc1d6d43d-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 08:48:08 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:08.838 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[781a744c-acba-4368-9b69-33eff0b7539d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:08 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:08.839 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0d6627-3098-4caa-ac02-6df56830318e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:08 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Nov 22 08:48:08 compute-0 NetworkManager[56326]: <info>  [1763801288.8431] device (tapfbd5a3ad-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 08:48:08 compute-0 NetworkManager[56326]: <info>  [1763801288.8476] device (tapfbd5a3ad-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 08:48:08 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:08.854 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[694cc4f6-0a23-414f-b057-e3ba9cd00584]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:08 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:08.884 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[1b681d42-122d-4d53-93a9-4115a4869131]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:08 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:08.917 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[d6435b0f-e4b2-40fe-87f4-6a35b4d0a459]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:08 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:08.926 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[5024b824-6867-4b37-b3ab-77b2ade38819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:08 compute-0 NetworkManager[56326]: <info>  [1763801288.9275] manager: (tapc1d6d43d-50): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Nov 22 08:48:08 compute-0 systemd-udevd[250663]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:48:08 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:08.968 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[1b4ffe7a-6070-4c51-adba-2b35764d16a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:08 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:08.971 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[8b021e64-7843-42db-8f29-0957d44b2af2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:08 compute-0 NetworkManager[56326]: <info>  [1763801288.9993] device (tapc1d6d43d-50): carrier: link connected
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:09.004 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[68cec1e5-8d7f-4630-8b44-1aca59faa239]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:09.020 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[68228848-98a5-4249-aed9-821a72b242ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1d6d43d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:de:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640258, 'reachable_time': 40834, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250693, 'error': None, 'target': 'ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:09.041 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae67b4e-023b-4037-8cd8-80145b592153]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:de40'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640258, 'tstamp': 640258}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250694, 'error': None, 'target': 'ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:09.060 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e1ecc4-25d4-4417-816c-807bf6aad60e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1d6d43d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:de:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640258, 'reachable_time': 40834, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250695, 'error': None, 'target': 'ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:09.094 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[9e0d690f-9443-46d0-b5bd-483aa49aba60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:09.167 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[47ac531e-297f-463f-90fe-a53b4691fe46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:09.170 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1d6d43d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:09.171 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:09.172 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1d6d43d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:09 compute-0 kernel: tapc1d6d43d-50: entered promiscuous mode
Nov 22 08:48:09 compute-0 NetworkManager[56326]: <info>  [1763801289.1754] manager: (tapc1d6d43d-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Nov 22 08:48:09 compute-0 nova_compute[189268]: 2025-11-22 08:48:09.174 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:09 compute-0 nova_compute[189268]: 2025-11-22 08:48:09.176 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:09.180 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc1d6d43d-50, col_values=(('external_ids', {'iface-id': 'e5648d3a-b45e-4174-893e-759e2a51c414'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:09 compute-0 nova_compute[189268]: 2025-11-22 08:48:09.182 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:09 compute-0 ovn_controller[97783]: 2025-11-22T08:48:09Z|00077|binding|INFO|Releasing lport e5648d3a-b45e-4174-893e-759e2a51c414 from this chassis (sb_readonly=0)
Nov 22 08:48:09 compute-0 nova_compute[189268]: 2025-11-22 08:48:09.183 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:09.184 106642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c1d6d43d-5b47-494d-a955-bb769150c95d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c1d6d43d-5b47-494d-a955-bb769150c95d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:09.185 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[0625fe0b-130c-4d7b-b65e-5d5161127b9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:09.186 106642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: global
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     log         /dev/log local0 debug
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     log-tag     haproxy-metadata-proxy-c1d6d43d-5b47-494d-a955-bb769150c95d
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     user        root
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     group       root
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     maxconn     1024
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     pidfile     /var/lib/neutron/external/pids/c1d6d43d-5b47-494d-a955-bb769150c95d.pid.haproxy
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     daemon
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: defaults
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     log global
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     mode http
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     option httplog
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     option dontlognull
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     option http-server-close
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     option forwardfor
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     retries                 3
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     timeout http-request    30s
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     timeout connect         30s
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     timeout client          32s
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     timeout server          32s
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     timeout http-keep-alive 30s
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: listen listener
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     bind 169.254.169.254:80
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:     http-request add-header X-OVN-Network-ID c1d6d43d-5b47-494d-a955-bb769150c95d
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:09.187 106642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d', 'env', 'PROCESS_TAG=haproxy-c1d6d43d-5b47-494d-a955-bb769150c95d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c1d6d43d-5b47-494d-a955-bb769150c95d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 08:48:09 compute-0 nova_compute[189268]: 2025-11-22 08:48:09.195 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:09 compute-0 nova_compute[189268]: 2025-11-22 08:48:09.276 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801289.2757597, a04b24d5-3478-4e5f-bb51-abf299fa4459 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:48:09 compute-0 nova_compute[189268]: 2025-11-22 08:48:09.277 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] VM Started (Lifecycle Event)
Nov 22 08:48:09 compute-0 nova_compute[189268]: 2025-11-22 08:48:09.293 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:48:09 compute-0 nova_compute[189268]: 2025-11-22 08:48:09.299 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801289.2759478, a04b24d5-3478-4e5f-bb51-abf299fa4459 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:48:09 compute-0 nova_compute[189268]: 2025-11-22 08:48:09.299 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] VM Paused (Lifecycle Event)
Nov 22 08:48:09 compute-0 nova_compute[189268]: 2025-11-22 08:48:09.318 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:48:09 compute-0 nova_compute[189268]: 2025-11-22 08:48:09.326 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:48:09 compute-0 nova_compute[189268]: 2025-11-22 08:48:09.349 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:48:09 compute-0 podman[250731]: 2025-11-22 08:48:09.630010239 +0000 UTC m=+0.027959863 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:09.990 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:09.991 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:09.991 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:11 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 22 08:48:11 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 22 08:48:11 compute-0 podman[250731]: 2025-11-22 08:48:11.559003611 +0000 UTC m=+1.956953225 container create b0280d202b8715c8f32d7a4e6960cb1f2325c66f1ead1ebf888135ce27a01c6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:48:11 compute-0 systemd[1]: Started libpod-conmon-b0280d202b8715c8f32d7a4e6960cb1f2325c66f1ead1ebf888135ce27a01c6f.scope.
Nov 22 08:48:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:48:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257ae6d208d29ac6186cef476f519546c2e95e4e67f96056000ddacddf325dbd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 08:48:12 compute-0 nova_compute[189268]: 2025-11-22 08:48:12.296 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:12 compute-0 podman[250731]: 2025-11-22 08:48:12.340420999 +0000 UTC m=+2.738370633 container init b0280d202b8715c8f32d7a4e6960cb1f2325c66f1ead1ebf888135ce27a01c6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 22 08:48:12 compute-0 podman[250731]: 2025-11-22 08:48:12.3489762 +0000 UTC m=+2.746925814 container start b0280d202b8715c8f32d7a4e6960cb1f2325c66f1ead1ebf888135ce27a01c6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:48:12 compute-0 neutron-haproxy-ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d[250765]: [NOTICE]   (250769) : New worker (250771) forked
Nov 22 08:48:12 compute-0 neutron-haproxy-ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d[250765]: [NOTICE]   (250769) : Loading success.
Nov 22 08:48:12 compute-0 nova_compute[189268]: 2025-11-22 08:48:12.391 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:14 compute-0 podman[250781]: 2025-11-22 08:48:14.765254243 +0000 UTC m=+0.083161708 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3)
Nov 22 08:48:14 compute-0 podman[250780]: 2025-11-22 08:48:14.769070775 +0000 UTC m=+0.086344302 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 08:48:17 compute-0 nova_compute[189268]: 2025-11-22 08:48:17.299 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:17 compute-0 nova_compute[189268]: 2025-11-22 08:48:17.393 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:17 compute-0 nova_compute[189268]: 2025-11-22 08:48:17.971 189273 DEBUG nova.network.neutron [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Successfully updated port: 3f5ad619-9cef-49b4-b0fd-8243d3506e32 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 08:48:18 compute-0 nova_compute[189268]: 2025-11-22 08:48:18.005 189273 DEBUG nova.network.neutron [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Successfully updated port: 5646e04c-958a-4629-b420-730d4967f183 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 08:48:18 compute-0 nova_compute[189268]: 2025-11-22 08:48:18.135 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Acquiring lock "refresh_cache-4414e066-bc1a-4a63-b3a0-5e88f0553032" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:48:18 compute-0 nova_compute[189268]: 2025-11-22 08:48:18.136 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Acquired lock "refresh_cache-4414e066-bc1a-4a63-b3a0-5e88f0553032" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:48:18 compute-0 nova_compute[189268]: 2025-11-22 08:48:18.136 189273 DEBUG nova.network.neutron [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 08:48:18 compute-0 podman[250816]: 2025-11-22 08:48:18.137129176 +0000 UTC m=+0.095099747 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, container_name=kepler, io.openshift.expose-services=, release-0.7.12=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, architecture=x86_64, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 22 08:48:18 compute-0 podman[250817]: 2025-11-22 08:48:18.146020465 +0000 UTC m=+0.102272730 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 08:48:18 compute-0 nova_compute[189268]: 2025-11-22 08:48:18.186 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Acquiring lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:48:18 compute-0 nova_compute[189268]: 2025-11-22 08:48:18.187 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Acquired lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:48:18 compute-0 nova_compute[189268]: 2025-11-22 08:48:18.187 189273 DEBUG nova.network.neutron [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 08:48:18 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:18.272 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:cf:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'd6:f7:8f:a1:cd:35'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:48:18 compute-0 nova_compute[189268]: 2025-11-22 08:48:18.272 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:18 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:18.273 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
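
The SbGlobalUpdateEvent matched above is an ovsdbapp row event: the IDL diffs old and new rows of SB_Global and dispatches registered events. A minimal sketch of such an event class, assuming a hypothetical agent object with an update_chassis_nb_cfg method (neutron's real handler additionally delays the chassis write, as the line above shows):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        """Fire on updates to the single SB_Global row."""

        def __init__(self, agent):
            self.agent = agent
            # events=('update',), table='SB_Global', conditions=None,
            # exactly as reported in the matched-event log line above.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            # row.nb_cfg is the new value (12 above); old carries nb_cfg=11.
            self.agent.update_chassis_nb_cfg(row.nb_cfg)  # hypothetical method
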
Nov 22 08:48:18 compute-0 nova_compute[189268]: 2025-11-22 08:48:18.651 189273 DEBUG nova.network.neutron [req-087e4188-4b8d-4d7b-96de-993940a62cf4 req-0fe50792-c907-41fd-b340-64fcb153d164 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Updated VIF entry in instance network info cache for port fbd5a3ad-e519-4a3f-ab67-99a00166bd4c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:48:18 compute-0 nova_compute[189268]: 2025-11-22 08:48:18.651 189273 DEBUG nova.network.neutron [req-087e4188-4b8d-4d7b-96de-993940a62cf4 req-0fe50792-c907-41fd-b340-64fcb153d164 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Updating instance_info_cache with network_info: [{"id": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "address": "fa:16:3e:3c:b1:72", "network": {"id": "c1d6d43d-5b47-494d-a955-bb769150c95d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-890547167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21dde3ab59bc4d5c890712c19e1b5ec8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbd5a3ad-e5", "ovs_interfaceid": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
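
The network_info payload written to the cache above is plain JSON. A short, self-contained sketch that walks such an array and lists each port's fixed IPs (the literal below is one entry from the logged payload, trimmed to the fields the loop uses):

    import json

    ni_json = '''[{"id": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c",
                   "active": false,
                   "network": {"subnets": [{"ips": [{"address": "10.100.0.4"}]}]}}]'''

    for port in json.loads(ni_json):
        for subnet in port["network"]["subnets"]:
            for ip in subnet["ips"]:
                state = "active" if port["active"] else "building"
                print(port["id"], ip["address"], state)
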
Nov 22 08:48:18 compute-0 nova_compute[189268]: 2025-11-22 08:48:18.672 189273 DEBUG oslo_concurrency.lockutils [req-087e4188-4b8d-4d7b-96de-993940a62cf4 req-0fe50792-c907-41fd-b340-64fcb153d164 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-a04b24d5-3478-4e5f-bb51-abf299fa4459" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:48:18 compute-0 nova_compute[189268]: 2025-11-22 08:48:18.751 189273 DEBUG nova.network.neutron [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 08:48:18 compute-0 nova_compute[189268]: 2025-11-22 08:48:18.756 189273 DEBUG nova.network.neutron [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 08:48:19 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:19.275 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
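
The DbSetCommand in that transaction maps directly onto ovsdbapp's public API. A rough equivalent built from scratch; the endpoint is an assumption (the agent reads its own ovn_sb_connection setting), and the record UUID and external_ids value are copied from the logged command:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    # Assumed southbound endpoint; adjust to the deployment's ovn_sb_connection.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6642', 'OVN_Southbound')
    sb_api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=60))

    # Set external_ids on the Chassis_Private row, as in the logged txn.
    with sb_api.transaction(check_error=True) as txn:
        txn.add(sb_api.db_set(
            'Chassis_Private', 'e5f17f07-bc92-4131-bf96-5df2839ca4b0',
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'})))
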
Nov 22 08:48:19 compute-0 nova_compute[189268]: 2025-11-22 08:48:19.461 189273 DEBUG nova.compute.manager [req-97b6e6c1-fc9a-461b-ad95-ec0d464d7f58 req-ac75ce76-0cc7-4925-a88f-48bc9e2b68da 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Received event network-changed-3f5ad619-9cef-49b4-b0fd-8243d3506e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:19 compute-0 nova_compute[189268]: 2025-11-22 08:48:19.462 189273 DEBUG nova.compute.manager [req-97b6e6c1-fc9a-461b-ad95-ec0d464d7f58 req-ac75ce76-0cc7-4925-a88f-48bc9e2b68da 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Refreshing instance network info cache due to event network-changed-3f5ad619-9cef-49b4-b0fd-8243d3506e32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:48:19 compute-0 nova_compute[189268]: 2025-11-22 08:48:19.462 189273 DEBUG oslo_concurrency.lockutils [req-97b6e6c1-fc9a-461b-ad95-ec0d464d7f58 req-ac75ce76-0cc7-4925-a88f-48bc9e2b68da 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-4414e066-bc1a-4a63-b3a0-5e88f0553032" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:48:19 compute-0 nova_compute[189268]: 2025-11-22 08:48:19.928 189273 DEBUG nova.network.neutron [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Successfully updated port: 363e6818-f5a5-4baa-87a9-7526c518ae95 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 08:48:19 compute-0 nova_compute[189268]: 2025-11-22 08:48:19.961 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Acquiring lock "refresh_cache-9f91d44e-f61c-44ca-b623-140121eb8965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:48:19 compute-0 nova_compute[189268]: 2025-11-22 08:48:19.961 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Acquired lock "refresh_cache-9f91d44e-f61c-44ca-b623-140121eb8965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:48:19 compute-0 nova_compute[189268]: 2025-11-22 08:48:19.962 189273 DEBUG nova.network.neutron [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.245 189273 DEBUG nova.network.neutron [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.613 189273 DEBUG nova.network.neutron [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Updating instance_info_cache with network_info: [{"id": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "address": "fa:16:3e:7a:63:17", "network": {"id": "3485ad45-c98a-4c02-b9a2-34cc945b16d2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1783802964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f5ad619-9c", "ovs_interfaceid": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.658 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Releasing lock "refresh_cache-4414e066-bc1a-4a63-b3a0-5e88f0553032" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.659 189273 DEBUG nova.compute.manager [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Instance network_info: |[{"id": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "address": "fa:16:3e:7a:63:17", "network": {"id": "3485ad45-c98a-4c02-b9a2-34cc945b16d2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1783802964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f5ad619-9c", "ovs_interfaceid": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.659 189273 DEBUG oslo_concurrency.lockutils [req-97b6e6c1-fc9a-461b-ad95-ec0d464d7f58 req-ac75ce76-0cc7-4925-a88f-48bc9e2b68da 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-4414e066-bc1a-4a63-b3a0-5e88f0553032" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.660 189273 DEBUG nova.network.neutron [req-97b6e6c1-fc9a-461b-ad95-ec0d464d7f58 req-ac75ce76-0cc7-4925-a88f-48bc9e2b68da 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Refreshing network info cache for port 3f5ad619-9cef-49b4-b0fd-8243d3506e32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.663 189273 DEBUG nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Start _get_guest_xml network_info=[{"id": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "address": "fa:16:3e:7a:63:17", "network": {"id": "3485ad45-c98a-4c02-b9a2-34cc945b16d2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1783802964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f5ad619-9c", "ovs_interfaceid": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T08:46:32Z,direct_url=<?>,disk_format='qcow2',id=ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T08:46:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.669 189273 DEBUG nova.network.neutron [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Updating instance_info_cache with network_info: [{"id": "5646e04c-958a-4629-b420-730d4967f183", "address": "fa:16:3e:45:c8:ca", "network": {"id": "40cb6b69-21d1-494d-9388-79ae29386703", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1184475015-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3503f7b171c4187acaf1d66e260df45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5646e04c-95", "ovs_interfaceid": "5646e04c-958a-4629-b420-730d4967f183", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.682 189273 WARNING nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.694 189273 DEBUG nova.virt.libvirt.host [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.695 189273 DEBUG nova.virt.libvirt.host [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.699 189273 DEBUG nova.virt.libvirt.host [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.700 189273 DEBUG nova.virt.libvirt.host [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.701 189273 DEBUG nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.701 189273 DEBUG nova.virt.hardware [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T08:46:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='60cc47c3-347f-4964-bb52-9bef8d0548a9',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T08:46:32Z,direct_url=<?>,disk_format='qcow2',id=ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T08:46:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.701 189273 DEBUG nova.virt.hardware [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.702 189273 DEBUG nova.virt.hardware [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.702 189273 DEBUG nova.virt.hardware [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.702 189273 DEBUG nova.virt.hardware [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.703 189273 DEBUG nova.virt.hardware [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.703 189273 DEBUG nova.virt.hardware [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.703 189273 DEBUG nova.virt.hardware [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.704 189273 DEBUG nova.virt.hardware [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.704 189273 DEBUG nova.virt.hardware [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.704 189273 DEBUG nova.virt.hardware [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
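
The topology search logged above (preferred 0:0:0, limits 65536 each, one vCPU) reduces to enumerating sockets x cores x threads factorizations of the vCPU count. An illustrative sketch of that enumeration, not nova's exact code (which lives in nova.virt.hardware):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product equals vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield s, c, t

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
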
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.709 189273 DEBUG nova.virt.libvirt.vif [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1615837079',display_name='tempest-ServerActionsTestJSON-server-1615837079',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1615837079',id=8,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLdsHFflrgi7wGkvgkOXdCwC+kr9nW2mi1DXZmxLox1ZC0TuSJdcF2M8rMeuABQiSpoDl4gw87gDh3KsMHxzPzzF3d1/1OBKsUUK2YCN1YD+nS62FFKtRtMD4Bx9Y/yudw==',key_name='tempest-keypair-416169958',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8de05c82cd5c4f7bbe156c45495011c2',ramdisk_id='',reservation_id='r-b52qwrco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-748326472',owner_user_name='tempest-ServerActionsTestJSON-748326472-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:47:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='16843c91d66144f880a31734be4d3dee',uuid=4414e066-bc1a-4a63-b3a0-5e88f0553032,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "address": "fa:16:3e:7a:63:17", "network": {"id": "3485ad45-c98a-4c02-b9a2-34cc945b16d2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1783802964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f5ad619-9c", "ovs_interfaceid": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.710 189273 DEBUG nova.network.os_vif_util [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Converting VIF {"id": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "address": "fa:16:3e:7a:63:17", "network": {"id": "3485ad45-c98a-4c02-b9a2-34cc945b16d2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1783802964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f5ad619-9c", "ovs_interfaceid": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.711 189273 DEBUG nova.network.os_vif_util [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:63:17,bridge_name='br-int',has_traffic_filtering=True,id=3f5ad619-9cef-49b4-b0fd-8243d3506e32,network=Network(3485ad45-c98a-4c02-b9a2-34cc945b16d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f5ad619-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.712 189273 DEBUG nova.objects.instance [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4414e066-bc1a-4a63-b3a0-5e88f0553032 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.727 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Releasing lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.728 189273 DEBUG nova.compute.manager [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Instance network_info: |[{"id": "5646e04c-958a-4629-b420-730d4967f183", "address": "fa:16:3e:45:c8:ca", "network": {"id": "40cb6b69-21d1-494d-9388-79ae29386703", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1184475015-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3503f7b171c4187acaf1d66e260df45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5646e04c-95", "ovs_interfaceid": "5646e04c-958a-4629-b420-730d4967f183", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.729 189273 DEBUG nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] End _get_guest_xml xml=<domain type="kvm">
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <uuid>4414e066-bc1a-4a63-b3a0-5e88f0553032</uuid>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <name>instance-00000008</name>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <memory>131072</memory>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <vcpu>1</vcpu>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <metadata>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <nova:name>tempest-ServerActionsTestJSON-server-1615837079</nova:name>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <nova:creationTime>2025-11-22 08:48:20</nova:creationTime>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <nova:flavor name="m1.nano">
Nov 22 08:48:20 compute-0 nova_compute[189268]:         <nova:memory>128</nova:memory>
Nov 22 08:48:20 compute-0 nova_compute[189268]:         <nova:disk>1</nova:disk>
Nov 22 08:48:20 compute-0 nova_compute[189268]:         <nova:swap>0</nova:swap>
Nov 22 08:48:20 compute-0 nova_compute[189268]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 08:48:20 compute-0 nova_compute[189268]:         <nova:vcpus>1</nova:vcpus>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       </nova:flavor>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <nova:owner>
Nov 22 08:48:20 compute-0 nova_compute[189268]:         <nova:user uuid="16843c91d66144f880a31734be4d3dee">tempest-ServerActionsTestJSON-748326472-project-member</nova:user>
Nov 22 08:48:20 compute-0 nova_compute[189268]:         <nova:project uuid="8de05c82cd5c4f7bbe156c45495011c2">tempest-ServerActionsTestJSON-748326472</nova:project>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       </nova:owner>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <nova:root type="image" uuid="ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <nova:ports>
Nov 22 08:48:20 compute-0 nova_compute[189268]:         <nova:port uuid="3f5ad619-9cef-49b4-b0fd-8243d3506e32">
Nov 22 08:48:20 compute-0 nova_compute[189268]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:         </nova:port>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       </nova:ports>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     </nova:instance>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   </metadata>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <sysinfo type="smbios">
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <system>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <entry name="manufacturer">RDO</entry>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <entry name="product">OpenStack Compute</entry>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <entry name="serial">4414e066-bc1a-4a63-b3a0-5e88f0553032</entry>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <entry name="uuid">4414e066-bc1a-4a63-b3a0-5e88f0553032</entry>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <entry name="family">Virtual Machine</entry>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     </system>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   </sysinfo>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <os>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <boot dev="hd"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <smbios mode="sysinfo"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   </os>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <features>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <acpi/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <apic/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <vmcoreinfo/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   </features>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <clock offset="utc">
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <timer name="hpet" present="no"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   </clock>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <cpu mode="host-model" match="exact">
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <target dev="vda" bus="virtio"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <disk type="file" device="cdrom">
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <driver name="qemu" type="raw" cache="none"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.config"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <target dev="sda" bus="sata"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <interface type="ethernet">
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <mac address="fa:16:3e:7a:63:17"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <mtu size="1442"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <target dev="tap3f5ad619-9c"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     </interface>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <serial type="pty">
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <log file="/var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/console.log" append="off"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     </serial>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <video>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     </video>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <input type="tablet" bus="usb"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <rng model="virtio">
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <backend model="random">/dev/urandom</backend>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="usb" index="0"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <memballoon model="virtio">
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <stats period="10"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     </memballoon>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:48:20 compute-0 nova_compute[189268]: </domain>
Nov 22 08:48:20 compute-0 nova_compute[189268]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
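
With the guest XML rendered, the driver hands it to libvirt. A bare sketch of that step using libvirt-python, assuming the XML dumped above was saved to domain.xml; nova's actual path goes through nova.virt.libvirt.guest.Guest rather than calling libvirt this directly:

    import libvirt

    conn = libvirt.open('qemu:///system')
    with open('domain.xml') as f:
        dom = conn.defineXML(f.read())  # persist the domain definition
    dom.create()                        # launch instance-00000008
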
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.730 189273 DEBUG nova.compute.manager [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Preparing to wait for external event network-vif-plugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.730 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Acquiring lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.731 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.731 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
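
The acquire/release pairs above come from oslo.concurrency; lockutils.lock() is the context manager behind those "acquired"/"released" lines. A minimal sketch of the "-events" lock pattern, with the lock name copied from the log:

    from oslo_concurrency import lockutils

    with lockutils.lock('4414e066-bc1a-4a63-b3a0-5e88f0553032-events'):
        # critical section: create or look up the pending
        # network-vif-plugged event for this instance
        pass
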
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.731 189273 DEBUG nova.virt.libvirt.vif [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1615837079',display_name='tempest-ServerActionsTestJSON-server-1615837079',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1615837079',id=8,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLdsHFflrgi7wGkvgkOXdCwC+kr9nW2mi1DXZmxLox1ZC0TuSJdcF2M8rMeuABQiSpoDl4gw87gDh3KsMHxzPzzF3d1/1OBKsUUK2YCN1YD+nS62FFKtRtMD4Bx9Y/yudw==',key_name='tempest-keypair-416169958',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8de05c82cd5c4f7bbe156c45495011c2',ramdisk_id='',reservation_id='r-b52qwrco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-748326472',owner_user_name='tempest-ServerActionsTestJSON-748326472-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:47:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='16843c91d66144f880a31734be4d3dee',uuid=4414e066-bc1a-4a63-b3a0-5e88f0553032,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "address": "fa:16:3e:7a:63:17", "network": {"id": "3485ad45-c98a-4c02-b9a2-34cc945b16d2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1783802964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f5ad619-9c", "ovs_interfaceid": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.732 189273 DEBUG nova.network.os_vif_util [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Converting VIF {"id": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "address": "fa:16:3e:7a:63:17", "network": {"id": "3485ad45-c98a-4c02-b9a2-34cc945b16d2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1783802964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f5ad619-9c", "ovs_interfaceid": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.732 189273 DEBUG nova.network.os_vif_util [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:63:17,bridge_name='br-int',has_traffic_filtering=True,id=3f5ad619-9cef-49b4-b0fd-8243d3506e32,network=Network(3485ad45-c98a-4c02-b9a2-34cc945b16d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f5ad619-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
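The user_data field in the Instance dump above is base64-encoded. A minimal sketch to decode it, using only the Python standard library (the literal is copied verbatim from the log line above):

    import base64

    user_data = ("IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIK"
                 "Y2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=")
    # Decodes to the cirros boot script:
    #   #!/bin/sh
    #   echo "Printing cirros user authorized keys"
    #   cat ~cirros/.ssh/authorized_keys || true
    print(base64.b64decode(user_data).decode())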
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.732 189273 DEBUG os_vif [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:63:17,bridge_name='br-int',has_traffic_filtering=True,id=3f5ad619-9cef-49b4-b0fd-8243d3506e32,network=Network(3485ad45-c98a-4c02-b9a2-34cc945b16d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f5ad619-9c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
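The plug call above hands the converted VIFOpenVSwitch object to the os-vif library. A minimal standalone sketch of the same call, assuming os-vif and its ovs plugin are installed; the VIF field values are copied from the converted object above, while the InstanceInfo name is a hypothetical libvirt domain name:

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # load the os-vif plugins (ovs, linux_bridge, ...)

    my_vif = vif.VIFOpenVSwitch(
        id='3f5ad619-9cef-49b4-b0fd-8243d3506e32',
        address='fa:16:3e:7a:63:17',
        vif_name='tap3f5ad619-9c',
        bridge_name='br-int',
        plugin='ovs',
        has_traffic_filtering=True,
        preserve_on_delete=False,
    )  # a real caller also attaches the Network and port profile shown above

    inst = instance_info.InstanceInfo(
        uuid='4414e066-bc1a-4a63-b3a0-5e88f0553032',
        name='instance-00000008',  # hypothetical domain name, for illustration
    )
    os_vif.plug(my_vif, inst)  # drives the ovsdbapp transactions logged below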
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.733 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.733 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.734 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.736 189273 DEBUG nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Start _get_guest_xml network_info=[{"id": "5646e04c-958a-4629-b420-730d4967f183", "address": "fa:16:3e:45:c8:ca", "network": {"id": "40cb6b69-21d1-494d-9388-79ae29386703", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1184475015-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3503f7b171c4187acaf1d66e260df45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5646e04c-95", "ovs_interfaceid": "5646e04c-958a-4629-b420-730d4967f183", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T08:46:32Z,direct_url=<?>,disk_format='qcow2',id=ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T08:46:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.739 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.739 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f5ad619-9c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.740 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3f5ad619-9c, col_values=(('external_ids', {'iface-id': '3f5ad619-9cef-49b4-b0fd-8243d3506e32', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:63:17', 'vm-uuid': '4414e066-bc1a-4a63-b3a0-5e88f0553032'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.742 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.743 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:48:20 compute-0 NetworkManager[56326]: <info>  [1763801300.7439] manager: (tap3f5ad619-9c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.747 189273 WARNING nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.753 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.755 189273 INFO os_vif [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:63:17,bridge_name='br-int',has_traffic_filtering=True,id=3f5ad619-9cef-49b4-b0fd-8243d3506e32,network=Network(3485ad45-c98a-4c02-b9a2-34cc945b16d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f5ad619-9c')
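The AddBridgeCommand/AddPortCommand/DbSetCommand lines above are os-vif's ovsdbapp transactions: ensure br-int exists, add the tap port, and tag its Interface row so OVN can bind it. A minimal sketch of the same transactions issued through ovsdbapp directly, assuming the default OVSDB unix socket path (all other values are copied from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=5))

    with api.transaction(check_error=True) as txn:
        # may_exist=True makes this idempotent ("Transaction caused no change" above)
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap3f5ad619-9c', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap3f5ad619-9c',
            ('external_ids', {'iface-id': '3f5ad619-9cef-49b4-b0fd-8243d3506e32',
                              'iface-status': 'active',
                              'attached-mac': 'fa:16:3e:7a:63:17',
                              'vm-uuid': '4414e066-bc1a-4a63-b3a0-5e88f0553032'})))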
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.756 189273 DEBUG nova.virt.libvirt.host [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.757 189273 DEBUG nova.virt.libvirt.host [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.762 189273 DEBUG nova.virt.libvirt.host [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.763 189273 DEBUG nova.virt.libvirt.host [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
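The two probes above first look for a cgroups-v1 cpu controller (absent on this host) and then find the cgroups-v2 one. A hypothetical re-implementation of the v2 probe, based on how cgroups v2 publishes its enabled controllers, not on nova's actual code:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        # cgroups v2 lists the root controllers in one space-separated file,
        # e.g. "cpuset cpu io memory hugetlb pids misc"
        try:
            controllers = Path(root, 'cgroup.controllers').read_text().split()
        except FileNotFoundError:
            return False  # not a cgroups-v2 mount
        return 'cpu' in controllers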
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.763 189273 DEBUG nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.764 189273 DEBUG nova.virt.hardware [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T08:46:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='60cc47c3-347f-4964-bb52-9bef8d0548a9',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T08:46:32Z,direct_url=<?>,disk_format='qcow2',id=ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T08:46:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.764 189273 DEBUG nova.virt.hardware [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.765 189273 DEBUG nova.virt.hardware [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.765 189273 DEBUG nova.virt.hardware [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.765 189273 DEBUG nova.virt.hardware [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.766 189273 DEBUG nova.virt.hardware [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.766 189273 DEBUG nova.virt.hardware [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.766 189273 DEBUG nova.virt.hardware [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.767 189273 DEBUG nova.virt.hardware [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.767 189273 DEBUG nova.virt.hardware [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.768 189273 DEBUG nova.virt.hardware [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
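With no flavor or image constraints (limits 65536:65536:65536, preferences 0:0:0), the topology search above reduces to enumerating sockets*cores*threads factorizations of the vCPU count. A simplified sketch of that enumeration (illustrative only; nova's version also orders results by the preferred topology):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product equals vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    # The 1-vCPU m1.nano flavor yields exactly [(1, 1, 1)], matching the
    # "Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)]" line.
    print(list(possible_topologies(1)))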
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.771 189273 DEBUG nova.virt.libvirt.vif [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1971201621',display_name='tempest-AttachInterfacesUnderV243Test-server-1971201621',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1971201621',id=9,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO0yQV+F7bUJ9i43S8GR8OAd0yxgsoOb8NPOhNiR3uK9S9NmmHM/BRImo4Z4Aq1ynKJ4PnRN3sSq5RWnN7QeY5ydkY8mnNlSZCKT98aFK5ToiaKz/eN8dHn5gNGqJOZSsw==',key_name='tempest-keypair-1162532163',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3503f7b171c4187acaf1d66e260df45',ramdisk_id='',reservation_id='r-r91c0l9v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1707587668',owner_user_name='tempest-AttachInterfacesUnderV243Test-1707587668-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:47:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d19b7a27c3e74d08af788a67b85247fc',uuid=81db0af1-e2c6-4f76-a043-9d51b0431db0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5646e04c-958a-4629-b420-730d4967f183", "address": "fa:16:3e:45:c8:ca", "network": {"id": "40cb6b69-21d1-494d-9388-79ae29386703", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1184475015-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3503f7b171c4187acaf1d66e260df45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5646e04c-95", "ovs_interfaceid": "5646e04c-958a-4629-b420-730d4967f183", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.771 189273 DEBUG nova.network.os_vif_util [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Converting VIF {"id": "5646e04c-958a-4629-b420-730d4967f183", "address": "fa:16:3e:45:c8:ca", "network": {"id": "40cb6b69-21d1-494d-9388-79ae29386703", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1184475015-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3503f7b171c4187acaf1d66e260df45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5646e04c-95", "ovs_interfaceid": "5646e04c-958a-4629-b420-730d4967f183", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.772 189273 DEBUG nova.network.os_vif_util [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:c8:ca,bridge_name='br-int',has_traffic_filtering=True,id=5646e04c-958a-4629-b420-730d4967f183,network=Network(40cb6b69-21d1-494d-9388-79ae29386703),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5646e04c-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.773 189273 DEBUG nova.objects.instance [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lazy-loading 'pci_devices' on Instance uuid 81db0af1-e2c6-4f76-a043-9d51b0431db0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.796 189273 DEBUG nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] End _get_guest_xml xml=<domain type="kvm">
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <uuid>81db0af1-e2c6-4f76-a043-9d51b0431db0</uuid>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <name>instance-00000009</name>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <memory>131072</memory>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <vcpu>1</vcpu>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <metadata>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-1971201621</nova:name>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <nova:creationTime>2025-11-22 08:48:20</nova:creationTime>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <nova:flavor name="m1.nano">
Nov 22 08:48:20 compute-0 nova_compute[189268]:         <nova:memory>128</nova:memory>
Nov 22 08:48:20 compute-0 nova_compute[189268]:         <nova:disk>1</nova:disk>
Nov 22 08:48:20 compute-0 nova_compute[189268]:         <nova:swap>0</nova:swap>
Nov 22 08:48:20 compute-0 nova_compute[189268]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 08:48:20 compute-0 nova_compute[189268]:         <nova:vcpus>1</nova:vcpus>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       </nova:flavor>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <nova:owner>
Nov 22 08:48:20 compute-0 nova_compute[189268]:         <nova:user uuid="d19b7a27c3e74d08af788a67b85247fc">tempest-AttachInterfacesUnderV243Test-1707587668-project-member</nova:user>
Nov 22 08:48:20 compute-0 nova_compute[189268]:         <nova:project uuid="a3503f7b171c4187acaf1d66e260df45">tempest-AttachInterfacesUnderV243Test-1707587668</nova:project>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       </nova:owner>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <nova:root type="image" uuid="ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <nova:ports>
Nov 22 08:48:20 compute-0 nova_compute[189268]:         <nova:port uuid="5646e04c-958a-4629-b420-730d4967f183">
Nov 22 08:48:20 compute-0 nova_compute[189268]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:         </nova:port>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       </nova:ports>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     </nova:instance>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   </metadata>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <sysinfo type="smbios">
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <system>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <entry name="manufacturer">RDO</entry>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <entry name="product">OpenStack Compute</entry>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <entry name="serial">81db0af1-e2c6-4f76-a043-9d51b0431db0</entry>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <entry name="uuid">81db0af1-e2c6-4f76-a043-9d51b0431db0</entry>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <entry name="family">Virtual Machine</entry>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     </system>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   </sysinfo>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <os>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <boot dev="hd"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <smbios mode="sysinfo"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   </os>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <features>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <acpi/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <apic/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <vmcoreinfo/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   </features>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <clock offset="utc">
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <timer name="hpet" present="no"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   </clock>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <cpu mode="host-model" match="exact">
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <target dev="vda" bus="virtio"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <disk type="file" device="cdrom">
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <driver name="qemu" type="raw" cache="none"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.config"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <target dev="sda" bus="sata"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <interface type="ethernet">
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <mac address="fa:16:3e:45:c8:ca"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <mtu size="1442"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <target dev="tap5646e04c-95"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     </interface>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <serial type="pty">
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <log file="/var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/console.log" append="off"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     </serial>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <video>
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     </video>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <input type="tablet" bus="usb"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <rng model="virtio">
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <backend model="random">/dev/urandom</backend>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <controller type="usb" index="0"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     <memballoon model="virtio">
Nov 22 08:48:20 compute-0 nova_compute[189268]:       <stats period="10"/>
Nov 22 08:48:20 compute-0 nova_compute[189268]:     </memballoon>
Nov 22 08:48:20 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:48:20 compute-0 nova_compute[189268]: </domain>
Nov 22 08:48:20 compute-0 nova_compute[189268]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
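Once the guest is defined, the same XML can be read back from libvirt. A minimal sketch with the libvirt-python bindings, assuming access to the system URI (the domain name is taken from the <name> element above):

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')  # read-only suffices for XML dumps
    dom = conn.lookupByName('instance-00000009')
    print(dom.XMLDesc(0))
    conn.close()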
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.798 189273 DEBUG nova.compute.manager [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Preparing to wait for external event network-vif-plugged-5646e04c-958a-4629-b420-730d4967f183 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.798 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Acquiring lock "81db0af1-e2c6-4f76-a043-9d51b0431db0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.799 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lock "81db0af1-e2c6-4f76-a043-9d51b0431db0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.799 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lock "81db0af1-e2c6-4f76-a043-9d51b0431db0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
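The Acquiring/acquired/released triplet above is oslo.concurrency's standard trace around a named in-process lock. A minimal sketch of the same pattern (the lock name is copied from the log; the function body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('81db0af1-e2c6-4f76-a043-9d51b0431db0-events')
    def _create_or_get_event():
        # nova serializes access to its per-instance event registry like this
        return {}

    _create_or_get_event()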
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.800 189273 DEBUG nova.virt.libvirt.vif [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1971201621',display_name='tempest-AttachInterfacesUnderV243Test-server-1971201621',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1971201621',id=9,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO0yQV+F7bUJ9i43S8GR8OAd0yxgsoOb8NPOhNiR3uK9S9NmmHM/BRImo4Z4Aq1ynKJ4PnRN3sSq5RWnN7QeY5ydkY8mnNlSZCKT98aFK5ToiaKz/eN8dHn5gNGqJOZSsw==',key_name='tempest-keypair-1162532163',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3503f7b171c4187acaf1d66e260df45',ramdisk_id='',reservation_id='r-r91c0l9v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1707587668',owner_user_name='tempest-AttachInterfacesUnderV243Test-1707587668-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:47:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d19b7a27c3e74d08af788a67b85247fc',uuid=81db0af1-e2c6-4f76-a043-9d51b0431db0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5646e04c-958a-4629-b420-730d4967f183", "address": "fa:16:3e:45:c8:ca", "network": {"id": "40cb6b69-21d1-494d-9388-79ae29386703", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1184475015-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3503f7b171c4187acaf1d66e260df45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5646e04c-95", "ovs_interfaceid": "5646e04c-958a-4629-b420-730d4967f183", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.800 189273 DEBUG nova.network.os_vif_util [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Converting VIF {"id": "5646e04c-958a-4629-b420-730d4967f183", "address": "fa:16:3e:45:c8:ca", "network": {"id": "40cb6b69-21d1-494d-9388-79ae29386703", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1184475015-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3503f7b171c4187acaf1d66e260df45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5646e04c-95", "ovs_interfaceid": "5646e04c-958a-4629-b420-730d4967f183", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.801 189273 DEBUG nova.network.os_vif_util [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:c8:ca,bridge_name='br-int',has_traffic_filtering=True,id=5646e04c-958a-4629-b420-730d4967f183,network=Network(40cb6b69-21d1-494d-9388-79ae29386703),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5646e04c-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.802 189273 DEBUG os_vif [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:c8:ca,bridge_name='br-int',has_traffic_filtering=True,id=5646e04c-958a-4629-b420-730d4967f183,network=Network(40cb6b69-21d1-494d-9388-79ae29386703),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5646e04c-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.802 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.803 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.803 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.808 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.808 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5646e04c-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.809 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5646e04c-95, col_values=(('external_ids', {'iface-id': '5646e04c-958a-4629-b420-730d4967f183', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:c8:ca', 'vm-uuid': '81db0af1-e2c6-4f76-a043-9d51b0431db0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.811 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.812 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:48:20 compute-0 NetworkManager[56326]: <info>  [1763801300.8129] manager: (tap5646e04c-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.823 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.824 189273 INFO os_vif [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:c8:ca,bridge_name='br-int',has_traffic_filtering=True,id=5646e04c-958a-4629-b420-730d4967f183,network=Network(40cb6b69-21d1-494d-9388-79ae29386703),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5646e04c-95')
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.909 189273 DEBUG nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.909 189273 DEBUG nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.909 189273 DEBUG nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] No VIF found with MAC fa:16:3e:45:c8:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.910 189273 INFO nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Using config drive
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.912 189273 DEBUG nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.912 189273 DEBUG nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.913 189273 DEBUG nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] No VIF found with MAC fa:16:3e:7a:63:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 08:48:20 compute-0 nova_compute[189268]: 2025-11-22 08:48:20.913 189273 INFO nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Using config drive
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.095 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.095 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.095 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.096 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
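[editor's note] The block of "Registering pollster" records above shows ceilometer discovering its pollsters as stevedore extensions and binding each one to a ThreadPoolExecutor. A minimal sketch of that discovery-and-dispatch pattern, assuming a hypothetical entry-point namespace; ceilometer's real manager also tracks the per-pollster cache, pollster history, and discovery cache seen in the log lines:

```python
# Sketch only: stevedore extension discovery dispatched to a thread pool,
# mirroring the pattern the DEBUG records above describe. The namespace
# string is an assumption, not taken from the log.
from concurrent.futures import ThreadPoolExecutor
from stevedore import extension

mgr = extension.ExtensionManager(namespace="ceilometer.poll.compute")  # assumed name
executor = ThreadPoolExecutor(max_workers=4)

def run_pollster(ext):
    # placeholder for the pollster's real sampling work
    return ext.name

for ext in mgr:  # each item is a stevedore Extension, as in the log
    future = executor.submit(run_pollster, ext)
    print("registered pollster", ext.name, "->", future.result())
```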
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.103 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 4414e066-bc1a-4a63-b3a0-5e88f0553032 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 22 08:48:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:22.105 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/4414e066-bc1a-4a63-b3a0-5e88f0553032 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}41de7311aa3eb0f3adb679afd5ea377bdc27c99a5c84bf2ba532fbbe80a7016c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
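[editor's note] The REQ line above is keystoneauth's curl-style rendering of a plain HTTP GET. A minimal sketch of the same request issued with python-requests instead of python-novaclient; the endpoint and server UUID are copied from the log, while the token is a placeholder read from the environment (the log only shows its SHA256 digest):

```python
# Sketch: reproduce the logged GET /v2.1/servers/{id} call directly.
import os
import requests

NOVA_ENDPOINT = "https://nova-internal.openstack.svc:8774/v2.1"
SERVER_ID = "4414e066-bc1a-4a63-b3a0-5e88f0553032"

resp = requests.get(
    f"{NOVA_ENDPOINT}/servers/{SERVER_ID}",
    headers={
        "Accept": "application/json",
        "User-Agent": "python-novaclient",
        "X-Auth-Token": os.environ["NOVA_TOKEN"],  # assumption: real token in env
        "X-OpenStack-Nova-API-Version": "2.1",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["server"]["status"])
```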
Nov 22 08:48:22 compute-0 nova_compute[189268]: 2025-11-22 08:48:22.302 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:22 compute-0 nova_compute[189268]: 2025-11-22 08:48:22.613 189273 INFO nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Creating config drive at /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.config
Nov 22 08:48:22 compute-0 nova_compute[189268]: 2025-11-22 08:48:22.621 189273 DEBUG oslo_concurrency.processutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcnazj5s3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:22 compute-0 nova_compute[189268]: 2025-11-22 08:48:22.644 189273 INFO nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Creating config drive at /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.config
Nov 22 08:48:22 compute-0 nova_compute[189268]: 2025-11-22 08:48:22.650 189273 DEBUG oslo_concurrency.processutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcqwu3ibn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:22 compute-0 nova_compute[189268]: 2025-11-22 08:48:22.669 189273 DEBUG nova.network.neutron [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Updating instance_info_cache with network_info: [{"id": "363e6818-f5a5-4baa-87a9-7526c518ae95", "address": "fa:16:3e:4c:a7:0e", "network": {"id": "6fab3996-ba47-4d62-be96-e51fc77ca467", "bridge": "br-int", "label": "tempest-ServersTestJSON-1394044478-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "545684c5a33d4873a3184e54d562685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap363e6818-f5", "ovs_interfaceid": "363e6818-f5a5-4baa-87a9-7526c518ae95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:48:22 compute-0 nova_compute[189268]: 2025-11-22 08:48:22.748 189273 DEBUG oslo_concurrency.processutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcnazj5s3" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:22 compute-0 nova_compute[189268]: 2025-11-22 08:48:22.777 189273 DEBUG oslo_concurrency.processutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcqwu3ibn" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
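[editor's note] The two mkisofs invocations above (one per instance, both returning 0) build the "config-2" ISO that cloud-init reads as a config drive. A minimal sketch of the same command run via subprocess; flags and the publisher string are copied from the log, while the output path's UUID and the staging directory are placeholders (Nova runs this through oslo_concurrency.processutils.execute with a real temp dir):

```python
# Sketch: the config-drive ISO build as logged by the driver.
import subprocess

cmd = [
    "/usr/bin/mkisofs",
    "-o", "/var/lib/nova/instances/<uuid>/disk.config",  # placeholder uuid
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
    "-quiet", "-J", "-r",
    "-V", "config-2",        # volume label cloud-init searches for
    "/tmp/metadata-dir",     # assumption: staging dir holding the metadata files
]
subprocess.run(cmd, check=True)
```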
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.052 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Releasing lock "refresh_cache-9f91d44e-f61c-44ca-b623-140121eb8965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.052 189273 DEBUG nova.compute.manager [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Instance network_info: |[{"id": "363e6818-f5a5-4baa-87a9-7526c518ae95", "address": "fa:16:3e:4c:a7:0e", "network": {"id": "6fab3996-ba47-4d62-be96-e51fc77ca467", "bridge": "br-int", "label": "tempest-ServersTestJSON-1394044478-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "545684c5a33d4873a3184e54d562685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap363e6818-f5", "ovs_interfaceid": "363e6818-f5a5-4baa-87a9-7526c518ae95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
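[editor's note] The network_info payload logged twice above is a list of VIF dicts. A minimal sketch of walking that structure to pull out the fixed IPs and MTU; the literal below is a trimmed copy of the logged entry, not the full cached object:

```python
# Sketch: extract addressing details from a network_info-shaped structure.
network_info = [{
    "id": "363e6818-f5a5-4baa-87a9-7526c518ae95",
    "network": {
        "meta": {"mtu": 1442},
        "subnets": [{
            "cidr": "10.100.0.0/28",
            "ips": [{"address": "10.100.0.11", "type": "fixed"}],
        }],
    },
}]

for vif in network_info:
    mtu = vif["network"]["meta"]["mtu"]
    for subnet in vif["network"]["subnets"]:
        fixed = [ip["address"] for ip in subnet["ips"] if ip["type"] == "fixed"]
        print(vif["id"], subnet["cidr"], fixed, "mtu", mtu)
```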
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.056 189273 DEBUG nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Start _get_guest_xml network_info=[{"id": "363e6818-f5a5-4baa-87a9-7526c518ae95", "address": "fa:16:3e:4c:a7:0e", "network": {"id": "6fab3996-ba47-4d62-be96-e51fc77ca467", "bridge": "br-int", "label": "tempest-ServersTestJSON-1394044478-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "545684c5a33d4873a3184e54d562685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap363e6818-f5", "ovs_interfaceid": "363e6818-f5a5-4baa-87a9-7526c518ae95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T08:46:32Z,direct_url=<?>,disk_format='qcow2',id=ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T08:46:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.065 189273 WARNING nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.071 189273 DEBUG nova.virt.libvirt.host [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.072 189273 DEBUG nova.virt.libvirt.host [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.078 189273 DEBUG nova.virt.libvirt.host [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.079 189273 DEBUG nova.virt.libvirt.host [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.079 189273 DEBUG nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.079 189273 DEBUG nova.virt.hardware [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T08:46:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='60cc47c3-347f-4964-bb52-9bef8d0548a9',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T08:46:32Z,direct_url=<?>,disk_format='qcow2',id=ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T08:46:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.080 189273 DEBUG nova.virt.hardware [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.080 189273 DEBUG nova.virt.hardware [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.081 189273 DEBUG nova.virt.hardware [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.081 189273 DEBUG nova.virt.hardware [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.082 189273 DEBUG nova.virt.hardware [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.082 189273 DEBUG nova.virt.hardware [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.082 189273 DEBUG nova.virt.hardware [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.083 189273 DEBUG nova.virt.hardware [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.083 189273 DEBUG nova.virt.hardware [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.083 189273 DEBUG nova.virt.hardware [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
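[editor's note] The nova.virt.hardware DEBUG sequence above enumerates CPU topologies: every (sockets, cores, threads) triple whose product equals the vCPU count, within the 65536 per-dimension limits, which for 1 vCPU yields only 1:1:1. A minimal sketch of that enumeration under those assumptions; it reproduces the logged result but is not Nova's actual code:

```python
# Sketch: enumerate candidate CPU topologies for a given vCPU count.
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    topos = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % s:
            continue
        for c in range(1, min(vcpus // s, max_cores) + 1):
            if (vcpus // s) % c:
                continue
            t = vcpus // (s * c)  # threads is whatever remains
            if t <= max_threads:
                topos.append((s, c, t))
    return topos

print(possible_topologies(1))  # [(1, 1, 1)], matching the log
```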
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.088 189273 DEBUG nova.virt.libvirt.vif [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:48:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-928189389',display_name='tempest-ServersTestJSON-server-928189389',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-928189389',id=10,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWgBqTZ/0n46so/7K9m+j4+RGHHw3jmz3RC+7lAP0bScTbbGQKh0orPC6DKXFXm1fo2bBGjEJBCyPyL5R3nDM59OEHz9kQPOpDY4hLptHaLVkXrhnvX8tscAPcrH6ebOQ==',key_name='tempest-keypair-1869925021',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='545684c5a33d4873a3184e54d562685f',ramdisk_id='',reservation_id='r-34p2j2aw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1120662526',owner_user_name='tempest-ServersTestJSON-1120662526-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:48:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d65f035f2b1b49319ad0f75cf17d724a',uuid=9f91d44e-f61c-44ca-b623-140121eb8965,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "363e6818-f5a5-4baa-87a9-7526c518ae95", "address": "fa:16:3e:4c:a7:0e", "network": {"id": "6fab3996-ba47-4d62-be96-e51fc77ca467", "bridge": "br-int", "label": "tempest-ServersTestJSON-1394044478-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "545684c5a33d4873a3184e54d562685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap363e6818-f5", "ovs_interfaceid": "363e6818-f5a5-4baa-87a9-7526c518ae95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.089 189273 DEBUG nova.network.os_vif_util [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Converting VIF {"id": "363e6818-f5a5-4baa-87a9-7526c518ae95", "address": "fa:16:3e:4c:a7:0e", "network": {"id": "6fab3996-ba47-4d62-be96-e51fc77ca467", "bridge": "br-int", "label": "tempest-ServersTestJSON-1394044478-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "545684c5a33d4873a3184e54d562685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap363e6818-f5", "ovs_interfaceid": "363e6818-f5a5-4baa-87a9-7526c518ae95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.090 189273 DEBUG nova.network.os_vif_util [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:a7:0e,bridge_name='br-int',has_traffic_filtering=True,id=363e6818-f5a5-4baa-87a9-7526c518ae95,network=Network(6fab3996-ba47-4d62-be96-e51fc77ca467),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap363e6818-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.091 189273 DEBUG nova.objects.instance [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f91d44e-f61c-44ca-b623-140121eb8965 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.104 189273 DEBUG nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] End _get_guest_xml xml=<domain type="kvm">
Nov 22 08:48:23 compute-0 nova_compute[189268]:   <uuid>9f91d44e-f61c-44ca-b623-140121eb8965</uuid>
Nov 22 08:48:23 compute-0 nova_compute[189268]:   <name>instance-0000000a</name>
Nov 22 08:48:23 compute-0 nova_compute[189268]:   <memory>131072</memory>
Nov 22 08:48:23 compute-0 nova_compute[189268]:   <vcpu>1</vcpu>
Nov 22 08:48:23 compute-0 nova_compute[189268]:   <metadata>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <nova:name>tempest-ServersTestJSON-server-928189389</nova:name>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <nova:creationTime>2025-11-22 08:48:23</nova:creationTime>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <nova:flavor name="m1.nano">
Nov 22 08:48:23 compute-0 nova_compute[189268]:         <nova:memory>128</nova:memory>
Nov 22 08:48:23 compute-0 nova_compute[189268]:         <nova:disk>1</nova:disk>
Nov 22 08:48:23 compute-0 nova_compute[189268]:         <nova:swap>0</nova:swap>
Nov 22 08:48:23 compute-0 nova_compute[189268]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 08:48:23 compute-0 nova_compute[189268]:         <nova:vcpus>1</nova:vcpus>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       </nova:flavor>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <nova:owner>
Nov 22 08:48:23 compute-0 nova_compute[189268]:         <nova:user uuid="d65f035f2b1b49319ad0f75cf17d724a">tempest-ServersTestJSON-1120662526-project-member</nova:user>
Nov 22 08:48:23 compute-0 nova_compute[189268]:         <nova:project uuid="545684c5a33d4873a3184e54d562685f">tempest-ServersTestJSON-1120662526</nova:project>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       </nova:owner>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <nova:root type="image" uuid="ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <nova:ports>
Nov 22 08:48:23 compute-0 nova_compute[189268]:         <nova:port uuid="363e6818-f5a5-4baa-87a9-7526c518ae95">
Nov 22 08:48:23 compute-0 nova_compute[189268]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:         </nova:port>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       </nova:ports>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     </nova:instance>
Nov 22 08:48:23 compute-0 nova_compute[189268]:   </metadata>
Nov 22 08:48:23 compute-0 nova_compute[189268]:   <sysinfo type="smbios">
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <system>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <entry name="manufacturer">RDO</entry>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <entry name="product">OpenStack Compute</entry>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <entry name="serial">9f91d44e-f61c-44ca-b623-140121eb8965</entry>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <entry name="uuid">9f91d44e-f61c-44ca-b623-140121eb8965</entry>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <entry name="family">Virtual Machine</entry>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     </system>
Nov 22 08:48:23 compute-0 nova_compute[189268]:   </sysinfo>
Nov 22 08:48:23 compute-0 nova_compute[189268]:   <os>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <boot dev="hd"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <smbios mode="sysinfo"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:   </os>
Nov 22 08:48:23 compute-0 nova_compute[189268]:   <features>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <acpi/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <apic/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <vmcoreinfo/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:   </features>
Nov 22 08:48:23 compute-0 nova_compute[189268]:   <clock offset="utc">
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <timer name="hpet" present="no"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:   </clock>
Nov 22 08:48:23 compute-0 nova_compute[189268]:   <cpu mode="host-model" match="exact">
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:48:23 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/9f91d44e-f61c-44ca-b623-140121eb8965/disk"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <target dev="vda" bus="virtio"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <disk type="file" device="cdrom">
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <driver name="qemu" type="raw" cache="none"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/9f91d44e-f61c-44ca-b623-140121eb8965/disk.config"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <target dev="sda" bus="sata"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <interface type="ethernet">
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <mac address="fa:16:3e:4c:a7:0e"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <mtu size="1442"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <target dev="tap363e6818-f5"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     </interface>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <serial type="pty">
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <log file="/var/lib/nova/instances/9f91d44e-f61c-44ca-b623-140121eb8965/console.log" append="off"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     </serial>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <video>
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     </video>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <input type="tablet" bus="usb"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <rng model="virtio">
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <backend model="random">/dev/urandom</backend>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <controller type="usb" index="0"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     <memballoon model="virtio">
Nov 22 08:48:23 compute-0 nova_compute[189268]:       <stats period="10"/>
Nov 22 08:48:23 compute-0 nova_compute[189268]:     </memballoon>
Nov 22 08:48:23 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:48:23 compute-0 nova_compute[189268]: </domain>
Nov 22 08:48:23 compute-0 nova_compute[189268]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
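[editor's note] The domain XML dumped above is what the driver ultimately hands to libvirt. A minimal sketch of defining and starting such a domain with the libvirt-python bindings; the file path is an assumed saved copy of the XML, and Nova performs this internally rather than via these top-level calls:

```python
# Sketch: hand a domain XML like the one logged above to libvirt.
import libvirt

with open("/tmp/instance-0000000a.xml") as f:  # assumption: saved copy of the XML
    domain_xml = f.read()

conn = libvirt.open("qemu:///system")
try:
    dom = conn.defineXML(domain_xml)   # persist the domain definition
    dom.create()                       # boot the guest
    print(dom.name(), "running:", dom.isActive() == 1)
finally:
    conn.close()
```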
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.106 189273 DEBUG nova.compute.manager [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Preparing to wait for external event network-vif-plugged-363e6818-f5a5-4baa-87a9-7526c518ae95 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.106 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Acquiring lock "9f91d44e-f61c-44ca-b623-140121eb8965-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.107 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lock "9f91d44e-f61c-44ca-b623-140121eb8965-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.107 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lock "9f91d44e-f61c-44ca-b623-140121eb8965-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
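[editor's note] The three lockutils records above wrap the registration of the "network-vif-plugged" external event: interest is recorded before the VIF is plugged, so the later wait cannot miss Neutron's notification. A minimal sketch of that prepare-then-wait pattern using threading.Event as a stand-in for Nova's eventlet-based machinery; function names here are illustrative, not Nova's API:

```python
# Sketch: register interest in an external event before triggering the
# action that produces it, then block until it is delivered.
import threading

pending_events = {}

def prepare_for_instance_event(instance_uuid, event_name):
    ev = threading.Event()
    pending_events[(instance_uuid, event_name)] = ev
    return ev

def deliver_event(instance_uuid, event_name):
    # called when the external notification (e.g. from Neutron) arrives
    pending_events.pop((instance_uuid, event_name)).set()

ev = prepare_for_instance_event(
    "9f91d44e-f61c-44ca-b623-140121eb8965",
    "network-vif-plugged-363e6818-f5a5-4baa-87a9-7526c518ae95",
)
# ... plug the VIF; the notification handler then calls deliver_event(...)
deliver_event(
    "9f91d44e-f61c-44ca-b623-140121eb8965",
    "network-vif-plugged-363e6818-f5a5-4baa-87a9-7526c518ae95",
)
ev.wait(timeout=300)  # returns immediately if already delivered
```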
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.108 189273 DEBUG nova.virt.libvirt.vif [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:48:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-928189389',display_name='tempest-ServersTestJSON-server-928189389',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-928189389',id=10,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWgBqTZ/0n46so/7K9m+j4+RGHHw3jmz3RC+7lAP0bScTbbGQKh0orPC6DKXFXm1fo2bBGjEJBCyPyL5R3nDM59OEHz9kQPOpDY4hLptHaLVkXrhnvX8tscAPcrH6ebOQ==',key_name='tempest-keypair-1869925021',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='545684c5a33d4873a3184e54d562685f',ramdisk_id='',reservation_id='r-34p2j2aw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1120662526',owner_user_name='tempest-ServersTestJSON-1120662526-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:48:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d65f035f2b1b49319ad0f75cf17d724a',uuid=9f91d44e-f61c-44ca-b623-140121eb8965,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "363e6818-f5a5-4baa-87a9-7526c518ae95", "address": "fa:16:3e:4c:a7:0e", "network": {"id": "6fab3996-ba47-4d62-be96-e51fc77ca467", "bridge": "br-int", "label": "tempest-ServersTestJSON-1394044478-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "545684c5a33d4873a3184e54d562685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap363e6818-f5", "ovs_interfaceid": "363e6818-f5a5-4baa-87a9-7526c518ae95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.108 189273 DEBUG nova.network.os_vif_util [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Converting VIF {"id": "363e6818-f5a5-4baa-87a9-7526c518ae95", "address": "fa:16:3e:4c:a7:0e", "network": {"id": "6fab3996-ba47-4d62-be96-e51fc77ca467", "bridge": "br-int", "label": "tempest-ServersTestJSON-1394044478-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "545684c5a33d4873a3184e54d562685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap363e6818-f5", "ovs_interfaceid": "363e6818-f5a5-4baa-87a9-7526c518ae95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.109 189273 DEBUG nova.network.os_vif_util [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:a7:0e,bridge_name='br-int',has_traffic_filtering=True,id=363e6818-f5a5-4baa-87a9-7526c518ae95,network=Network(6fab3996-ba47-4d62-be96-e51fc77ca467),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap363e6818-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.109 189273 DEBUG os_vif [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:a7:0e,bridge_name='br-int',has_traffic_filtering=True,id=363e6818-f5a5-4baa-87a9-7526c518ae95,network=Network(6fab3996-ba47-4d62-be96-e51fc77ca467),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap363e6818-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.110 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.111 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.111 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:48:23 compute-0 NetworkManager[56326]: <info>  [1763801303.1191] manager: (tap5646e04c-95): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.118 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.119 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap363e6818-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.120 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap363e6818-f5, col_values=(('external_ids', {'iface-id': '363e6818-f5a5-4baa-87a9-7526c518ae95', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4c:a7:0e', 'vm-uuid': '9f91d44e-f61c-44ca-b623-140121eb8965'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
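[editor's note] The two ovsdbapp commands logged above (AddPortCommand followed by DbSetCommand on the Interface row) have a well-known ovs-vsctl equivalent. A minimal sketch of that equivalent issued from Python; os-vif actually speaks to ovsdb directly via ovsdbapp, so this CLI form is a rough functional stand-in, with port, MAC, and UUIDs copied from the log:

```python
# Sketch: add the tap port to br-int and set the same external_ids in one
# ovs-vsctl transaction.
import subprocess

port = "tap363e6818-f5"
subprocess.run([
    "ovs-vsctl", "--may-exist", "add-port", "br-int", port,
    "--", "set", "Interface", port,
    "external_ids:iface-id=363e6818-f5a5-4baa-87a9-7526c518ae95",
    "external_ids:iface-status=active",
    "external_ids:attached-mac=fa:16:3e:4c:a7:0e",
    "external_ids:vm-uuid=9f91d44e-f61c-44ca-b623-140121eb8965",
], check=True)
```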
Nov 22 08:48:23 compute-0 kernel: tap5646e04c-95: entered promiscuous mode
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.122 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:23 compute-0 NetworkManager[56326]: <info>  [1763801303.1239] manager: (tap363e6818-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.126 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:48:23 compute-0 ovn_controller[97783]: 2025-11-22T08:48:23Z|00078|binding|INFO|Claiming lport 5646e04c-958a-4629-b420-730d4967f183 for this chassis.
Nov 22 08:48:23 compute-0 ovn_controller[97783]: 2025-11-22T08:48:23Z|00079|binding|INFO|5646e04c-958a-4629-b420-730d4967f183: Claiming fa:16:3e:45:c8:ca 10.100.0.9
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.145 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.147 189273 INFO os_vif [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:a7:0e,bridge_name='br-int',has_traffic_filtering=True,id=363e6818-f5a5-4baa-87a9-7526c518ae95,network=Network(6fab3996-ba47-4d62-be96-e51fc77ca467),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap363e6818-f5')
Nov 22 08:48:23 compute-0 ovn_controller[97783]: 2025-11-22T08:48:23Z|00080|binding|INFO|Setting lport 5646e04c-958a-4629-b420-730d4967f183 ovn-installed in OVS
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.151 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:23 compute-0 systemd-udevd[250897]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.159 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:23 compute-0 systemd-machined[155703]: New machine qemu-8-instance-00000009.
Nov 22 08:48:23 compute-0 NetworkManager[56326]: <info>  [1763801303.1743] device (tap5646e04c-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 08:48:23 compute-0 NetworkManager[56326]: <info>  [1763801303.1784] device (tap5646e04c-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 08:48:23 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000009.
Nov 22 08:48:23 compute-0 ovn_controller[97783]: 2025-11-22T08:48:23Z|00081|binding|INFO|Setting lport 5646e04c-958a-4629-b420-730d4967f183 up in Southbound
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.278 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:c8:ca 10.100.0.9'], port_security=['fa:16:3e:45:c8:ca 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '81db0af1-e2c6-4f76-a043-9d51b0431db0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40cb6b69-21d1-494d-9388-79ae29386703', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3503f7b171c4187acaf1d66e260df45', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a269c81-10ed-4489-b2c0-d40e635cf9cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74930d9b-5b3a-4c37-ba41-b8ad01a238b4, chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=5646e04c-958a-4629-b420-730d4967f183) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.280 106642 INFO neutron.agent.ovn.metadata.agent [-] Port 5646e04c-958a-4629-b420-730d4967f183 in datapath 40cb6b69-21d1-494d-9388-79ae29386703 bound to our chassis
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.281 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40cb6b69-21d1-494d-9388-79ae29386703
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.294 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[fb9adc80-1dd8-41bf-a396-02e364cb4c4b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.295 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap40cb6b69-21 in ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
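[editor's note] The record above has the metadata agent creating a veth pair with one end inside the per-network ovnmeta namespace. A minimal sketch of that step expressed as iproute2 commands from Python; the agent really does this through pyroute2 behind oslo.privsep, but the namespace and interface names below are taken from the log lines:

```python
# Sketch: create the namespace, the veth pair, and move one end inside.
import subprocess

ns = "ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703"
subprocess.run(["ip", "netns", "add", ns], check=True)
subprocess.run(["ip", "link", "add", "tap40cb6b69-20",
                "type", "veth", "peer", "name", "tap40cb6b69-21"], check=True)
# move the inner end into the namespace and bring both ends up
subprocess.run(["ip", "link", "set", "tap40cb6b69-21", "netns", ns], check=True)
subprocess.run(["ip", "-n", ns, "link", "set", "tap40cb6b69-21", "up"], check=True)
subprocess.run(["ip", "link", "set", "tap40cb6b69-20", "up"], check=True)
```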
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.297 239666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap40cb6b69-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.297 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[3f1b5fc0-b5a4-4498-a5d1-524a3fd40b1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.298 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[356dec62-3275-4a63-9bac-78acdf48286c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.310 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[25f3cd68-51f1-4f4f-96f4-a5bd25c439d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.340 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[7ada88b0-a568-44db-9197-ef585619a674]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:23 compute-0 systemd-udevd[250899]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:48:23 compute-0 NetworkManager[56326]: <info>  [1763801303.3903] manager: (tap3f5ad619-9c): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Nov 22 08:48:23 compute-0 kernel: tap3f5ad619-9c: entered promiscuous mode
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.393 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:23 compute-0 ovn_controller[97783]: 2025-11-22T08:48:23Z|00082|if_status|INFO|Not updating pb chassis for 3f5ad619-9cef-49b4-b0fd-8243d3506e32 now as sb is readonly
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.397 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[7a64ea79-61da-48c3-9b3a-2f296285c1ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.403 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[3763a1f5-eaf8-433f-b059-6bc139585bb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:23 compute-0 NetworkManager[56326]: <info>  [1763801303.4073] manager: (tap40cb6b69-20): new Veth device (/org/freedesktop/NetworkManager/Devices/44)
Nov 22 08:48:23 compute-0 NetworkManager[56326]: <info>  [1763801303.4174] device (tap3f5ad619-9c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.421 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:23 compute-0 NetworkManager[56326]: <info>  [1763801303.4236] device (tap3f5ad619-9c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.426 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.435 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[dc687ce1-5b7a-4622-a3c9-fa6136034582]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.438 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[46e4edae-78c4-466d-9b9d-0e3d635c56d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:23 compute-0 systemd-machined[155703]: New machine qemu-9-instance-00000008.
Nov 22 08:48:23 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000008.
Nov 22 08:48:23 compute-0 NetworkManager[56326]: <info>  [1763801303.4617] device (tap40cb6b69-20): carrier: link connected
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.466 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[b4892cc7-8943-47b5-be97-7c42a87d678c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.487 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[04362654-fff5-4ecc-9e31-e12479e1c269]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40cb6b69-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:e8:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641704, 'reachable_time': 31376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250956, 'error': None, 'target': 'ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.502 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[0f831a75-7fa4-4671-9ebd-07924ef03c19]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:e8cf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641704, 'tstamp': 641704}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250959, 'error': None, 'target': 'ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.518 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[b700505a-ea88-4522-97e7-057e5846f338]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40cb6b69-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:e8:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641704, 'reachable_time': 31376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250961, 'error': None, 'target': 'ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.553 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[335fe7c8-7f87-4f59-9c33-ba043abf228e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:23 compute-0 ovn_controller[97783]: 2025-11-22T08:48:23Z|00083|binding|INFO|Claiming lport 3f5ad619-9cef-49b4-b0fd-8243d3506e32 for this chassis.
Nov 22 08:48:23 compute-0 ovn_controller[97783]: 2025-11-22T08:48:23Z|00084|binding|INFO|3f5ad619-9cef-49b4-b0fd-8243d3506e32: Claiming fa:16:3e:7a:63:17 10.100.0.14
Nov 22 08:48:23 compute-0 ovn_controller[97783]: 2025-11-22T08:48:23Z|00085|binding|INFO|Setting lport 3f5ad619-9cef-49b4-b0fd-8243d3506e32 ovn-installed in OVS
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.571 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.622 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[29c1874e-483f-4274-92fc-8d8c67956989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.623 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40cb6b69-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.624 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.625 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40cb6b69-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.627 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:23 compute-0 NetworkManager[56326]: <info>  [1763801303.6280] manager: (tap40cb6b69-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Nov 22 08:48:23 compute-0 kernel: tap40cb6b69-20: entered promiscuous mode
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.634 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.637 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40cb6b69-20, col_values=(('external_ids', {'iface-id': '14593604-d14e-4f1d-99d7-97dd69b97e09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.639 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:23 compute-0 ovn_controller[97783]: 2025-11-22T08:48:23Z|00086|binding|INFO|Releasing lport 14593604-d14e-4f1d-99d7-97dd69b97e09 from this chassis (sb_readonly=1)
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.655 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.659 106642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/40cb6b69-21d1-494d-9388-79ae29386703.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/40cb6b69-21d1-494d-9388-79ae29386703.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.660 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.661 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[f02c981c-97e2-4a5d-9ba3-091130be1d9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.661 106642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: global
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     log         /dev/log local0 debug
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     log-tag     haproxy-metadata-proxy-40cb6b69-21d1-494d-9388-79ae29386703
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     user        root
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     group       root
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     maxconn     1024
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     pidfile     /var/lib/neutron/external/pids/40cb6b69-21d1-494d-9388-79ae29386703.pid.haproxy
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     daemon
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: defaults
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     log global
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     mode http
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     option httplog
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     option dontlognull
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     option http-server-close
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     option forwardfor
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     retries                 3
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     timeout http-request    30s
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     timeout connect         30s
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     timeout client          32s
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     timeout server          32s
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     timeout http-keep-alive 30s
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: listen listener
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     bind 169.254.169.254:80
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:     http-request add-header X-OVN-Network-ID 40cb6b69-21d1-494d-9388-79ae29386703
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.662 106642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703', 'env', 'PROCESS_TAG=haproxy-40cb6b69-21d1-494d-9388-79ae29386703', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/40cb6b69-21d1-494d-9388-79ae29386703.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 08:48:23 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:23.768 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:63:17 10.100.0.14'], port_security=['fa:16:3e:7a:63:17 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4414e066-bc1a-4a63-b3a0-5e88f0553032', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3485ad45-c98a-4c02-b9a2-34cc945b16d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8de05c82cd5c4f7bbe156c45495011c2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4307701f-74fd-4973-8f0e-4204e8ea3fdd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5195068-1343-424b-8d74-4082a6f38e4c, chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=3f5ad619-9cef-49b4-b0fd-8243d3506e32) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:48:23 compute-0 ovn_controller[97783]: 2025-11-22T08:48:23Z|00087|binding|INFO|Setting lport 3f5ad619-9cef-49b4-b0fd-8243d3506e32 up in Southbound
Nov 22 08:48:23 compute-0 podman[250900]: 2025-11-22 08:48:23.803210131 +0000 UTC m=+0.600821564 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, name=ubi9-minimal, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.829 189273 DEBUG nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.830 189273 DEBUG nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.830 189273 DEBUG nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] No VIF found with MAC fa:16:3e:4c:a7:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.831 189273 INFO nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Using config drive
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.929 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801303.9292138, 81db0af1-e2c6-4f76-a043-9d51b0431db0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.930 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] VM Started (Lifecycle Event)
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.948 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.955 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801303.929396, 81db0af1-e2c6-4f76-a043-9d51b0431db0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.956 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] VM Paused (Lifecycle Event)
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.975 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.982 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:48:23 compute-0 nova_compute[189268]: 2025-11-22 08:48:23.998 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.070 189273 DEBUG nova.compute.manager [req-c81618a1-e1e6-465a-abaf-a3c0eecdac0d req-b881f4cd-ca8a-46c5-ac3d-2a1dc956dd7c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Received event network-changed-5646e04c-958a-4629-b420-730d4967f183 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.070 189273 DEBUG nova.compute.manager [req-c81618a1-e1e6-465a-abaf-a3c0eecdac0d req-b881f4cd-ca8a-46c5-ac3d-2a1dc956dd7c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Refreshing instance network info cache due to event network-changed-5646e04c-958a-4629-b420-730d4967f183. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.071 189273 DEBUG oslo_concurrency.lockutils [req-c81618a1-e1e6-465a-abaf-a3c0eecdac0d req-b881f4cd-ca8a-46c5-ac3d-2a1dc956dd7c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.072 189273 DEBUG oslo_concurrency.lockutils [req-c81618a1-e1e6-465a-abaf-a3c0eecdac0d req-b881f4cd-ca8a-46c5-ac3d-2a1dc956dd7c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.072 189273 DEBUG nova.network.neutron [req-c81618a1-e1e6-465a-abaf-a3c0eecdac0d req-b881f4cd-ca8a-46c5-ac3d-2a1dc956dd7c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Refreshing network info cache for port 5646e04c-958a-4629-b420-730d4967f183 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:48:24 compute-0 podman[251021]: 2025-11-22 08:48:24.065212995 +0000 UTC m=+0.031308803 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.190 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801304.1898117, 4414e066-bc1a-4a63-b3a0-5e88f0553032 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.190 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] VM Started (Lifecycle Event)
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.212 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.221 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801304.1902304, 4414e066-bc1a-4a63-b3a0-5e88f0553032 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.222 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] VM Paused (Lifecycle Event)
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.246 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.251 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.269 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:48:24 compute-0 podman[251021]: 2025-11-22 08:48:24.523820464 +0000 UTC m=+0.489916262 container create 22c280efe9ee28c58e958e6eef33485141fa94aba15535b1badc0b7b1bcac666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:48:24 compute-0 systemd[1]: Started libpod-conmon-22c280efe9ee28c58e958e6eef33485141fa94aba15535b1badc0b7b1bcac666.scope.
Nov 22 08:48:24 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:48:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ed8a6a4d0547e61f25b9e0344ab7ec8777f0727302d5cc56cea6782a9290c8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.751 189273 INFO nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Creating config drive at /var/lib/nova/instances/9f91d44e-f61c-44ca-b623-140121eb8965/disk.config
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.757 189273 DEBUG oslo_concurrency.processutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f91d44e-f61c-44ca-b623-140121eb8965/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzy6hzc6h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.822 189273 DEBUG nova.compute.manager [req-3132298a-5a13-463b-8931-29b563bd0d1a req-106aa742-9f86-46f3-9206-8dd91d06930f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Received event network-vif-plugged-fbd5a3ad-e519-4a3f-ab67-99a00166bd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.823 189273 DEBUG oslo_concurrency.lockutils [req-3132298a-5a13-463b-8931-29b563bd0d1a req-106aa742-9f86-46f3-9206-8dd91d06930f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "a04b24d5-3478-4e5f-bb51-abf299fa4459-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.826 189273 DEBUG oslo_concurrency.lockutils [req-3132298a-5a13-463b-8931-29b563bd0d1a req-106aa742-9f86-46f3-9206-8dd91d06930f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "a04b24d5-3478-4e5f-bb51-abf299fa4459-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.827 189273 DEBUG oslo_concurrency.lockutils [req-3132298a-5a13-463b-8931-29b563bd0d1a req-106aa742-9f86-46f3-9206-8dd91d06930f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "a04b24d5-3478-4e5f-bb51-abf299fa4459-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.827 189273 DEBUG nova.compute.manager [req-3132298a-5a13-463b-8931-29b563bd0d1a req-106aa742-9f86-46f3-9206-8dd91d06930f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Processing event network-vif-plugged-fbd5a3ad-e519-4a3f-ab67-99a00166bd4c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.828 189273 DEBUG nova.compute.manager [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Instance event wait completed in 15 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.834 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801304.8335617, a04b24d5-3478-4e5f-bb51-abf299fa4459 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.835 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] VM Resumed (Lifecycle Event)
Nov 22 08:48:24 compute-0 podman[251021]: 2025-11-22 08:48:24.838722001 +0000 UTC m=+0.804817809 container init 22c280efe9ee28c58e958e6eef33485141fa94aba15535b1badc0b7b1bcac666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.840 189273 DEBUG nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.845 189273 INFO nova.virt.libvirt.driver [-] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Instance spawned successfully.
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.846 189273 DEBUG nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 08:48:24 compute-0 podman[251021]: 2025-11-22 08:48:24.848029152 +0000 UTC m=+0.814124950 container start 22c280efe9ee28c58e958e6eef33485141fa94aba15535b1badc0b7b1bcac666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.867 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.875 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:48:24 compute-0 neutron-haproxy-ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703[251044]: [NOTICE]   (251051) : New worker (251053) forked
Nov 22 08:48:24 compute-0 neutron-haproxy-ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703[251044]: [NOTICE]   (251051) : Loading success.
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.880 189273 DEBUG nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.880 189273 DEBUG nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.881 189273 DEBUG nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.881 189273 DEBUG nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.882 189273 DEBUG nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.882 189273 DEBUG nova.virt.libvirt.driver [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.885 189273 DEBUG oslo_concurrency.processutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f91d44e-f61c-44ca-b623-140121eb8965/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzy6hzc6h" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.919 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:48:24 compute-0 kernel: tap363e6818-f5: entered promiscuous mode
Nov 22 08:48:24 compute-0 NetworkManager[56326]: <info>  [1763801304.9542] manager: (tap363e6818-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Nov 22 08:48:24 compute-0 systemd-udevd[250943]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.958 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:24 compute-0 ovn_controller[97783]: 2025-11-22T08:48:24Z|00088|binding|INFO|Claiming lport 363e6818-f5a5-4baa-87a9-7526c518ae95 for this chassis.
Nov 22 08:48:24 compute-0 ovn_controller[97783]: 2025-11-22T08:48:24Z|00089|binding|INFO|363e6818-f5a5-4baa-87a9-7526c518ae95: Claiming fa:16:3e:4c:a7:0e 10.100.0.11
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.964 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:24.965 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1670 Content-Type: application/json Date: Sat, 22 Nov 2025 08:48:22 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-67c5ca46-c109-459c-9922-032c7ad01419 x-openstack-request-id: req-67c5ca46-c109-459c-9922-032c7ad01419 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 22 08:48:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:24.966 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "4414e066-bc1a-4a63-b3a0-5e88f0553032", "name": "tempest-ServerActionsTestJSON-server-1615837079", "status": "BUILD", "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "user_id": "16843c91d66144f880a31734be4d3dee", "metadata": {}, "hostId": "cb497ba1e773e2e6462feb93636d252fa5d5837a65e831f3361fe641", "image": {"id": "ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc"}]}, "flavor": {"id": "60cc47c3-347f-4964-bb52-9bef8d0548a9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/60cc47c3-347f-4964-bb52-9bef8d0548a9"}]}, "created": "2025-11-22T08:47:54Z", "updated": "2025-11-22T08:47:58Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/4414e066-bc1a-4a63-b3a0-5e88f0553032"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/4414e066-bc1a-4a63-b3a0-5e88f0553032"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "key_name": "tempest-keypair-416169958", "OS-SRV-USG:launched_at": null, "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--938035362"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000008", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": "spawning", "OS-EXT-STS:vm_state": "building", "OS-EXT-STS:power_state": 0, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 22 08:48:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:24.966 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/4414e066-bc1a-4a63-b3a0-5e88f0553032 used request id req-67c5ca46-c109-459c-9922-032c7ad01419 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 22 08:48:24 compute-0 NetworkManager[56326]: <info>  [1763801304.9777] device (tap363e6818-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 08:48:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:24.977 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4414e066-bc1a-4a63-b3a0-5e88f0553032', 'name': 'tempest-ServerActionsTestJSON-server-1615837079', 'flavor': {'id': '60cc47c3-347f-4964-bb52-9bef8d0548a9', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000008', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'paused', 'tenant_id': '8de05c82cd5c4f7bbe156c45495011c2', 'user_id': '16843c91d66144f880a31734be4d3dee', 'hostId': 'cb497ba1e773e2e6462feb93636d252fa5d5837a65e831f3361fe641', 'status': 'paused', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:48:24 compute-0 NetworkManager[56326]: <info>  [1763801304.9793] device (tap363e6818-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 08:48:24 compute-0 nova_compute[189268]: 2025-11-22 08:48:24.982 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:24 compute-0 ovn_controller[97783]: 2025-11-22T08:48:24Z|00090|binding|INFO|Setting lport 363e6818-f5a5-4baa-87a9-7526c518ae95 ovn-installed in OVS
Nov 22 08:48:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:24.982 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 81db0af1-e2c6-4f76-a043-9d51b0431db0 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 22 08:48:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:24.984 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/81db0af1-e2c6-4f76-a043-9d51b0431db0 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}41de7311aa3eb0f3adb679afd5ea377bdc27c99a5c84bf2ba532fbbe80a7016c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 22 08:48:25 compute-0 systemd-machined[155703]: New machine qemu-10-instance-0000000a.
Nov 22 08:48:25 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Nov 22 08:48:25 compute-0 nova_compute[189268]: 2025-11-22 08:48:25.094 189273 DEBUG nova.network.neutron [req-97b6e6c1-fc9a-461b-ad95-ec0d464d7f58 req-ac75ce76-0cc7-4925-a88f-48bc9e2b68da 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Updated VIF entry in instance network info cache for port 3f5ad619-9cef-49b4-b0fd-8243d3506e32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:48:25 compute-0 nova_compute[189268]: 2025-11-22 08:48:25.095 189273 DEBUG nova.network.neutron [req-97b6e6c1-fc9a-461b-ad95-ec0d464d7f58 req-ac75ce76-0cc7-4925-a88f-48bc9e2b68da 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Updating instance_info_cache with network_info: [{"id": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "address": "fa:16:3e:7a:63:17", "network": {"id": "3485ad45-c98a-4c02-b9a2-34cc945b16d2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1783802964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f5ad619-9c", "ovs_interfaceid": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.112 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:a7:0e 10.100.0.11'], port_security=['fa:16:3e:4c:a7:0e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9f91d44e-f61c-44ca-b623-140121eb8965', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fab3996-ba47-4d62-be96-e51fc77ca467', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '545684c5a33d4873a3184e54d562685f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be733b51-89d2-4915-bff5-02710932177b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec853b59-ccb0-4017-a731-dfff3e782d8f, chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=363e6818-f5a5-4baa-87a9-7526c518ae95) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:48:25 compute-0 ovn_controller[97783]: 2025-11-22T08:48:25Z|00091|binding|INFO|Setting lport 363e6818-f5a5-4baa-87a9-7526c518ae95 up in Southbound
Nov 22 08:48:25 compute-0 nova_compute[189268]: 2025-11-22 08:48:25.125 189273 INFO nova.compute.manager [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Took 27.87 seconds to spawn the instance on the hypervisor.
Nov 22 08:48:25 compute-0 nova_compute[189268]: 2025-11-22 08:48:25.126 189273 DEBUG nova.compute.manager [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:48:25 compute-0 nova_compute[189268]: 2025-11-22 08:48:25.128 189273 DEBUG oslo_concurrency.lockutils [req-97b6e6c1-fc9a-461b-ad95-ec0d464d7f58 req-ac75ce76-0cc7-4925-a88f-48bc9e2b68da 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-4414e066-bc1a-4a63-b3a0-5e88f0553032" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.210 106642 INFO neutron.agent.ovn.metadata.agent [-] Port 3f5ad619-9cef-49b4-b0fd-8243d3506e32 in datapath 3485ad45-c98a-4c02-b9a2-34cc945b16d2 unbound from our chassis
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.213 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3485ad45-c98a-4c02-b9a2-34cc945b16d2
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.229 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[4839d0ff-75f6-441a-8721-2f9c295d71b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.231 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3485ad45-c1 in ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.234 239666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3485ad45-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.234 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f4c852-12fa-4007-92df-249bd2163caa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.235 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[a38d876d-c985-46d2-884a-28f8a7fead9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.254 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[e4500ef0-4bfd-4069-9fed-d3228fd430a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:25 compute-0 nova_compute[189268]: 2025-11-22 08:48:25.265 189273 INFO nova.compute.manager [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Took 29.05 seconds to build instance.
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.271 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[6a152b61-fad6-4e1a-8eb4-a3c29c548610]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.301 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[10a46d81-b448-496d-9b65-8d6af0323e13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.308 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[422ffab5-6ed4-4e6e-a3b9-579e9e0dafe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:25 compute-0 NetworkManager[56326]: <info>  [1763801305.3091] manager: (tap3485ad45-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Nov 22 08:48:25 compute-0 nova_compute[189268]: 2025-11-22 08:48:25.310 189273 DEBUG oslo_concurrency.lockutils [None req-62747cca-ad47-428b-bef4-887bbf37aa44 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lock "a04b24d5-3478-4e5f-bb51-abf299fa4459" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 29.311s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.343 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[46bda677-5598-495d-82b8-167e0629a028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.353 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[97085637-522c-4b09-ae26-5b67e518a2db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:25 compute-0 NetworkManager[56326]: <info>  [1763801305.3766] device (tap3485ad45-c0): carrier: link connected
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.382 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[cb48483a-18aa-4f05-816b-b0e06cdbe8a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.401 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[febd74ae-c21f-411b-b9f2-d9144cfaafd8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3485ad45-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:9a:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641896, 'reachable_time': 17387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251094, 'error': None, 'target': 'ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.417 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[55aa6c20-f64c-4406-bbc4-25397a2587e2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:9af2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641896, 'tstamp': 641896}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251095, 'error': None, 'target': 'ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.436 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[b7daddbf-eaed-4cf7-bcb4-8c7adf715b92]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3485ad45-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:9a:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641896, 'reachable_time': 17387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251096, 'error': None, 'target': 'ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.471 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[efef92a7-fc60-4926-8bd7-1bfafdbeca37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.539 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[63de62ae-54a5-42c6-b504-ff16b83aaa4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.544 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3485ad45-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.544 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.545 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3485ad45-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:25 compute-0 NetworkManager[56326]: <info>  [1763801305.5502] manager: (tap3485ad45-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Nov 22 08:48:25 compute-0 kernel: tap3485ad45-c0: entered promiscuous mode
Nov 22 08:48:25 compute-0 nova_compute[189268]: 2025-11-22 08:48:25.551 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.555 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3485ad45-c0, col_values=(('external_ids', {'iface-id': '37fb22bb-e01c-451f-a2d2-26ee384f1620'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:25 compute-0 nova_compute[189268]: 2025-11-22 08:48:25.560 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:25 compute-0 ovn_controller[97783]: 2025-11-22T08:48:25Z|00092|binding|INFO|Releasing lport 37fb22bb-e01c-451f-a2d2-26ee384f1620 from this chassis (sb_readonly=0)
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.563 106642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3485ad45-c98a-4c02-b9a2-34cc945b16d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3485ad45-c98a-4c02-b9a2-34cc945b16d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.564 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[a104e106-d213-47c3-a11d-febf6d684214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.566 106642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: global
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     log         /dev/log local0 debug
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     log-tag     haproxy-metadata-proxy-3485ad45-c98a-4c02-b9a2-34cc945b16d2
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     user        root
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     group       root
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     maxconn     1024
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     pidfile     /var/lib/neutron/external/pids/3485ad45-c98a-4c02-b9a2-34cc945b16d2.pid.haproxy
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     daemon
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: defaults
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     log global
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     mode http
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     option httplog
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     option dontlognull
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     option http-server-close
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     option forwardfor
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     retries                 3
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     timeout http-request    30s
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     timeout connect         30s
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     timeout client          32s
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     timeout server          32s
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     timeout http-keep-alive 30s
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: listen listener
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     bind 169.254.169.254:80
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:     http-request add-header X-OVN-Network-ID 3485ad45-c98a-4c02-b9a2-34cc945b16d2
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 08:48:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:25.570 106642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2', 'env', 'PROCESS_TAG=haproxy-3485ad45-c98a-4c02-b9a2-34cc945b16d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3485ad45-c98a-4c02-b9a2-34cc945b16d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 08:48:25 compute-0 nova_compute[189268]: 2025-11-22 08:48:25.582 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:25 compute-0 nova_compute[189268]: 2025-11-22 08:48:25.752 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801305.7522352, 9f91d44e-f61c-44ca-b623-140121eb8965 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:48:25 compute-0 nova_compute[189268]: 2025-11-22 08:48:25.757 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] VM Started (Lifecycle Event)
Nov 22 08:48:25 compute-0 nova_compute[189268]: 2025-11-22 08:48:25.789 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:48:25 compute-0 nova_compute[189268]: 2025-11-22 08:48:25.796 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801305.7523813, 9f91d44e-f61c-44ca-b623-140121eb8965 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:48:25 compute-0 nova_compute[189268]: 2025-11-22 08:48:25.797 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] VM Paused (Lifecycle Event)
Nov 22 08:48:25 compute-0 nova_compute[189268]: 2025-11-22 08:48:25.815 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:48:25 compute-0 nova_compute[189268]: 2025-11-22 08:48:25.822 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:48:25 compute-0 nova_compute[189268]: 2025-11-22 08:48:25.841 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:48:26 compute-0 podman[251136]: 2025-11-22 08:48:25.980144358 +0000 UTC m=+0.032289209 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 08:48:26 compute-0 nova_compute[189268]: 2025-11-22 08:48:26.198 189273 DEBUG nova.network.neutron [req-c81618a1-e1e6-465a-abaf-a3c0eecdac0d req-b881f4cd-ca8a-46c5-ac3d-2a1dc956dd7c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Updated VIF entry in instance network info cache for port 5646e04c-958a-4629-b420-730d4967f183. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:48:26 compute-0 nova_compute[189268]: 2025-11-22 08:48:26.199 189273 DEBUG nova.network.neutron [req-c81618a1-e1e6-465a-abaf-a3c0eecdac0d req-b881f4cd-ca8a-46c5-ac3d-2a1dc956dd7c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Updating instance_info_cache with network_info: [{"id": "5646e04c-958a-4629-b420-730d4967f183", "address": "fa:16:3e:45:c8:ca", "network": {"id": "40cb6b69-21d1-494d-9388-79ae29386703", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1184475015-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3503f7b171c4187acaf1d66e260df45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5646e04c-95", "ovs_interfaceid": "5646e04c-958a-4629-b420-730d4967f183", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:48:26 compute-0 nova_compute[189268]: 2025-11-22 08:48:26.223 189273 DEBUG oslo_concurrency.lockutils [req-c81618a1-e1e6-465a-abaf-a3c0eecdac0d req-b881f4cd-ca8a-46c5-ac3d-2a1dc956dd7c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:48:26 compute-0 nova_compute[189268]: 2025-11-22 08:48:26.223 189273 DEBUG nova.compute.manager [req-c81618a1-e1e6-465a-abaf-a3c0eecdac0d req-b881f4cd-ca8a-46c5-ac3d-2a1dc956dd7c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Received event network-changed-363e6818-f5a5-4baa-87a9-7526c518ae95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:26 compute-0 nova_compute[189268]: 2025-11-22 08:48:26.224 189273 DEBUG nova.compute.manager [req-c81618a1-e1e6-465a-abaf-a3c0eecdac0d req-b881f4cd-ca8a-46c5-ac3d-2a1dc956dd7c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Refreshing instance network info cache due to event network-changed-363e6818-f5a5-4baa-87a9-7526c518ae95. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:48:26 compute-0 nova_compute[189268]: 2025-11-22 08:48:26.224 189273 DEBUG oslo_concurrency.lockutils [req-c81618a1-e1e6-465a-abaf-a3c0eecdac0d req-b881f4cd-ca8a-46c5-ac3d-2a1dc956dd7c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-9f91d44e-f61c-44ca-b623-140121eb8965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:48:26 compute-0 nova_compute[189268]: 2025-11-22 08:48:26.225 189273 DEBUG oslo_concurrency.lockutils [req-c81618a1-e1e6-465a-abaf-a3c0eecdac0d req-b881f4cd-ca8a-46c5-ac3d-2a1dc956dd7c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-9f91d44e-f61c-44ca-b623-140121eb8965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:48:26 compute-0 nova_compute[189268]: 2025-11-22 08:48:26.225 189273 DEBUG nova.network.neutron [req-c81618a1-e1e6-465a-abaf-a3c0eecdac0d req-b881f4cd-ca8a-46c5-ac3d-2a1dc956dd7c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Refreshing network info cache for port 363e6818-f5a5-4baa-87a9-7526c518ae95 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:48:26 compute-0 podman[251136]: 2025-11-22 08:48:26.306881843 +0000 UTC m=+0.359026674 container create 4b8ce9d9a76ff91ec88923e9e0dee755bce11c23215e5b5b5bee0381cbddf28e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:48:26 compute-0 systemd[1]: Started libpod-conmon-4b8ce9d9a76ff91ec88923e9e0dee755bce11c23215e5b5b5bee0381cbddf28e.scope.
Nov 22 08:48:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ac45aed571e6cee9809f16606508038752bbce8a2db1f13c38a64182a964cf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 08:48:26 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:26.478 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1680 Content-Type: application/json Date: Sat, 22 Nov 2025 08:48:24 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-c6726b53-6fa4-4d2f-8cdd-d1adb1d4777a x-openstack-request-id: req-c6726b53-6fa4-4d2f-8cdd-d1adb1d4777a _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 22 08:48:26 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:26.478 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "81db0af1-e2c6-4f76-a043-9d51b0431db0", "name": "tempest-AttachInterfacesUnderV243Test-server-1971201621", "status": "BUILD", "tenant_id": "a3503f7b171c4187acaf1d66e260df45", "user_id": "d19b7a27c3e74d08af788a67b85247fc", "metadata": {}, "hostId": "98703f577fec049d4acc8e4543cb12cbe3b24611ec667fff7d9d6e23", "image": {"id": "ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc"}]}, "flavor": {"id": "60cc47c3-347f-4964-bb52-9bef8d0548a9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/60cc47c3-347f-4964-bb52-9bef8d0548a9"}]}, "created": "2025-11-22T08:47:54Z", "updated": "2025-11-22T08:47:59Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/81db0af1-e2c6-4f76-a043-9d51b0431db0"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/81db0af1-e2c6-4f76-a043-9d51b0431db0"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "key_name": "tempest-keypair-1162532163", "OS-SRV-USG:launched_at": null, "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1879328261"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000009", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": "spawning", "OS-EXT-STS:vm_state": "building", "OS-EXT-STS:power_state": 0, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 22 08:48:26 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:26.478 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/81db0af1-e2c6-4f76-a043-9d51b0431db0 used request id req-c6726b53-6fa4-4d2f-8cdd-d1adb1d4777a request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 22 08:48:26 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:26.480 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '81db0af1-e2c6-4f76-a043-9d51b0431db0', 'name': 'tempest-AttachInterfacesUnderV243Test-server-1971201621', 'flavor': {'id': '60cc47c3-347f-4964-bb52-9bef8d0548a9', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'paused', 'tenant_id': 'a3503f7b171c4187acaf1d66e260df45', 'user_id': 'd19b7a27c3e74d08af788a67b85247fc', 'hostId': '98703f577fec049d4acc8e4543cb12cbe3b24611ec667fff7d9d6e23', 'status': 'paused', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:48:26 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:26.482 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a04b24d5-3478-4e5f-bb51-abf299fa4459 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 22 08:48:26 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:26.483 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a04b24d5-3478-4e5f-bb51-abf299fa4459 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}41de7311aa3eb0f3adb679afd5ea377bdc27c99a5c84bf2ba532fbbe80a7016c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 22 08:48:26 compute-0 podman[251136]: 2025-11-22 08:48:26.523104466 +0000 UTC m=+0.575249317 container init 4b8ce9d9a76ff91ec88923e9e0dee755bce11c23215e5b5b5bee0381cbddf28e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 08:48:26 compute-0 podman[251136]: 2025-11-22 08:48:26.533096965 +0000 UTC m=+0.585241806 container start 4b8ce9d9a76ff91ec88923e9e0dee755bce11c23215e5b5b5bee0381cbddf28e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 08:48:26 compute-0 neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2[251160]: [NOTICE]   (251177) : New worker (251179) forked
Nov 22 08:48:26 compute-0 neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2[251160]: [NOTICE]   (251177) : Loading success.
Nov 22 08:48:26 compute-0 podman[251148]: 2025-11-22 08:48:26.569341899 +0000 UTC m=+0.220858078 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.620 106642 INFO neutron.agent.ovn.metadata.agent [-] Port 363e6818-f5a5-4baa-87a9-7526c518ae95 in datapath 6fab3996-ba47-4d62-be96-e51fc77ca467 unbound from our chassis
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.623 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6fab3996-ba47-4d62-be96-e51fc77ca467
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.636 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[fdd69ad4-5bb7-4ed8-8a1c-6f77e9d0d469]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.644 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6fab3996-b1 in ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.649 239666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6fab3996-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.649 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[2c084a3f-abe8-4fd5-9e80-794bc439a952]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.652 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[5392f05d-2784-46aa-bdfb-4674f1d46ec0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.667 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[05d92c4b-f7c7-44ce-a245-42db6ae16ed9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.686 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[4a604fbd-1ff0-4fdc-8a0e-9133afe7732f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.717 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[167197f4-3839-4936-bd7e-3ee2583dc5e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:26 compute-0 NetworkManager[56326]: <info>  [1763801306.7283] manager: (tap6fab3996-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.737 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[e431244b-dbf8-41be-80bd-5bafcdc59e51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:26 compute-0 systemd-udevd[251196]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.771 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[533667c4-d980-4582-9358-1722510a55fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.776 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[2c57412f-7c66-44aa-b861-60f84780e255]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:26 compute-0 NetworkManager[56326]: <info>  [1763801306.8100] device (tap6fab3996-b0): carrier: link connected
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.815 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[93eadcfd-42b9-4a6d-94bd-9fa55f75d69f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.834 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca59122-a84e-44c5-8f14-6dd1cb03b20c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fab3996-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:fc:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642039, 'reachable_time': 38784, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251215, 'error': None, 'target': 'ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.852 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[6c299738-aad9-485e-8714-6f571cd065aa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:fcaf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642039, 'tstamp': 642039}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251216, 'error': None, 'target': 'ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.868 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[8b30cda4-c34c-4d2c-a837-e8a617e1dafe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fab3996-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:fc:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642039, 'reachable_time': 38784, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251217, 'error': None, 'target': 'ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.903 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[bb4db285-6547-441c-bedb-ea8e48f49b5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.967 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[43f64bb5-10c6-450f-80f6-fcbd73803202]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.969 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fab3996-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.970 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.971 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fab3996-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:26 compute-0 nova_compute[189268]: 2025-11-22 08:48:26.974 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:26 compute-0 kernel: tap6fab3996-b0: entered promiscuous mode
Nov 22 08:48:26 compute-0 NetworkManager[56326]: <info>  [1763801306.9757] manager: (tap6fab3996-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Nov 22 08:48:26 compute-0 nova_compute[189268]: 2025-11-22 08:48:26.978 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.980 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6fab3996-b0, col_values=(('external_ids', {'iface-id': '408492d7-9155-4d2b-8e8a-15c1eda4ae9f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:26 compute-0 nova_compute[189268]: 2025-11-22 08:48:26.981 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:26 compute-0 ovn_controller[97783]: 2025-11-22T08:48:26Z|00093|binding|INFO|Releasing lport 408492d7-9155-4d2b-8e8a-15c1eda4ae9f from this chassis (sb_readonly=0)
Nov 22 08:48:26 compute-0 nova_compute[189268]: 2025-11-22 08:48:26.996 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:26.997 106642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6fab3996-ba47-4d62-be96-e51fc77ca467.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6fab3996-ba47-4d62-be96-e51fc77ca467.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 08:48:26 compute-0 nova_compute[189268]: 2025-11-22 08:48:26.999 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:27.000 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[b581f683-30c8-484a-b9d0-ca35d1817205]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:27.001 106642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]: global
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     log         /dev/log local0 debug
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     log-tag     haproxy-metadata-proxy-6fab3996-ba47-4d62-be96-e51fc77ca467
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     user        root
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     group       root
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     maxconn     1024
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     pidfile     /var/lib/neutron/external/pids/6fab3996-ba47-4d62-be96-e51fc77ca467.pid.haproxy
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     daemon
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]: defaults
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     log global
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     mode http
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     option httplog
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     option dontlognull
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     option http-server-close
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     option forwardfor
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     retries                 3
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     timeout http-request    30s
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     timeout connect         30s
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     timeout client          32s
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     timeout server          32s
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     timeout http-keep-alive 30s
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]: listen listener
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     bind 169.254.169.254:80
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:     http-request add-header X-OVN-Network-ID 6fab3996-ba47-4d62-be96-e51fc77ca467
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 08:48:27 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:27.002 106642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467', 'env', 'PROCESS_TAG=haproxy-6fab3996-ba47-4d62-be96-e51fc77ca467', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6fab3996-ba47-4d62-be96-e51fc77ca467.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
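Having rendered the per-network haproxy configuration above, the agent now launches haproxy inside the ovnmeta- namespace through rootwrap. Stripped of the sudo/neutron-rootwrap privilege escalation and the PROCESS_TAG environment marker, the logged command reduces to this sketch:

    import subprocess

    NETNS = "ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467"
    CFG = "/var/lib/neutron/ovn-metadata-proxy/6fab3996-ba47-4d62-be96-e51fc77ca467.conf"

    # 'ip netns exec' gives haproxy the namespace's view of the network, so its
    # bind on 169.254.169.254:80 exists only inside ovnmeta-...; the 'daemon'
    # directive in the rendered config makes haproxy fork and return immediately.
    subprocess.run(["ip", "netns", "exec", NETNS, "haproxy", "-f", CFG], check=True)

The pidfile path in the config matches the one the agent probed a moment earlier (the ENOENT on .pid.haproxy), which is how it concluded that no proxy was running for this network yet.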
Nov 22 08:48:27 compute-0 nova_compute[189268]: 2025-11-22 08:48:27.305 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:27 compute-0 podman[251249]: 2025-11-22 08:48:27.407842623 +0000 UTC m=+0.027949133 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 08:48:27 compute-0 podman[251249]: 2025-11-22 08:48:27.629000939 +0000 UTC m=+0.249107419 container create 6cae06f2e4f32de914c2b9faad6429a98da05d7fec10345c87513043f5eded16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 08:48:27 compute-0 systemd[1]: Started libpod-conmon-6cae06f2e4f32de914c2b9faad6429a98da05d7fec10345c87513043f5eded16.scope.
Nov 22 08:48:27 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:48:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bab12c3bf858de61a4a69d7af5068af658f9f8df00f67ba00947dc6db1a114/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 08:48:27 compute-0 podman[251249]: 2025-11-22 08:48:27.815643296 +0000 UTC m=+0.435749796 container init 6cae06f2e4f32de914c2b9faad6429a98da05d7fec10345c87513043f5eded16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:48:27 compute-0 podman[251249]: 2025-11-22 08:48:27.823721634 +0000 UTC m=+0.443828134 container start 6cae06f2e4f32de914c2b9faad6429a98da05d7fec10345c87513043f5eded16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:48:27 compute-0 neutron-haproxy-ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467[251264]: [NOTICE]   (251268) : New worker (251270) forked
Nov 22 08:48:27 compute-0 neutron-haproxy-ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467[251264]: [NOTICE]   (251268) : Loading success.
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.123 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.755 189273 DEBUG nova.compute.manager [req-dfa2cc38-54de-48e4-be67-a599dddedfe8 req-01618be1-60d4-4b99-a23e-5078dfc2a183 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Received event network-vif-plugged-5646e04c-958a-4629-b420-730d4967f183 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.756 189273 DEBUG oslo_concurrency.lockutils [req-dfa2cc38-54de-48e4-be67-a599dddedfe8 req-01618be1-60d4-4b99-a23e-5078dfc2a183 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "81db0af1-e2c6-4f76-a043-9d51b0431db0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.756 189273 DEBUG oslo_concurrency.lockutils [req-dfa2cc38-54de-48e4-be67-a599dddedfe8 req-01618be1-60d4-4b99-a23e-5078dfc2a183 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "81db0af1-e2c6-4f76-a043-9d51b0431db0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.757 189273 DEBUG oslo_concurrency.lockutils [req-dfa2cc38-54de-48e4-be67-a599dddedfe8 req-01618be1-60d4-4b99-a23e-5078dfc2a183 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "81db0af1-e2c6-4f76-a043-9d51b0431db0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.757 189273 DEBUG nova.compute.manager [req-dfa2cc38-54de-48e4-be67-a599dddedfe8 req-01618be1-60d4-4b99-a23e-5078dfc2a183 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Processing event network-vif-plugged-5646e04c-958a-4629-b420-730d4967f183 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.758 189273 DEBUG nova.compute.manager [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
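The req-dfa2cc38 lines show the network-vif-plugged handshake completing: the spawning thread registered a waiter for the event, Neutron reported the port up via nova's external-event API, and pop_instance_event released the waiter after the logged 4 seconds. A simplified stand-in for that coordination pattern (assumption: this compresses nova.compute.manager's InstanceEvents bookkeeping into a few lines):

    import threading

    class InstanceEvents:
        def __init__(self) -> None:
            self._events: dict[str, threading.Event] = {}
            self._lock = threading.Lock()

        def prepare(self, name: str) -> threading.Event:
            # Registered by the spawn path before plugging the VIF.
            with self._lock:
                return self._events.setdefault(name, threading.Event())

        def pop(self, name: str) -> None:
            # Called from the external-event handler when Neutron's
            # notification arrives.
            with self._lock:
                event = self._events.pop(name, None)
            if event:
                event.set()

    events = InstanceEvents()
    waiter = events.prepare("network-vif-plugged-5646e04c-958a-4629-b420-730d4967f183")
    events.pop("network-vif-plugged-5646e04c-958a-4629-b420-730d4967f183")
    assert waiter.is_set()  # the real spawn thread blocks on waiter.wait(timeout) instead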
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.764 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801308.7646873, 81db0af1-e2c6-4f76-a043-9d51b0431db0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.765 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] VM Resumed (Lifecycle Event)
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.767 189273 DEBUG nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.780 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.784 189273 INFO nova.virt.libvirt.driver [-] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Instance spawned successfully.
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.784 189273 DEBUG nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.786 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.807 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.813 189273 DEBUG nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.814 189273 DEBUG nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.815 189273 DEBUG nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.815 189273 DEBUG nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.816 189273 DEBUG nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:28 compute-0 nova_compute[189268]: 2025-11-22 08:48:28.817 189273 DEBUG nova.virt.libvirt.driver [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:29 compute-0 nova_compute[189268]: 2025-11-22 08:48:29.408 189273 INFO nova.compute.manager [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Took 29.46 seconds to spawn the instance on the hypervisor.
Nov 22 08:48:29 compute-0 nova_compute[189268]: 2025-11-22 08:48:29.409 189273 DEBUG nova.compute.manager [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:48:29 compute-0 nova_compute[189268]: 2025-11-22 08:48:29.445 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:29 compute-0 nova_compute[189268]: 2025-11-22 08:48:29.527 189273 INFO nova.compute.manager [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Took 31.83 seconds to build instance.
Nov 22 08:48:29 compute-0 nova_compute[189268]: 2025-11-22 08:48:29.549 189273 DEBUG oslo_concurrency.lockutils [None req-5c8944bb-be12-4cc5-ae0d-e999908fe9e4 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lock "81db0af1-e2c6-4f76-a043-9d51b0431db0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 32.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:29 compute-0 podman[203476]: time="2025-11-22T08:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:48:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33220 "" "Go-http-client/1.1"
Nov 22 08:48:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6182 "" "Go-http-client/1.1"
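These two podman lines are the libpod REST API being polled over its unix socket. A quick way to replay the first request by hand, assuming the default root socket at /run/podman/podman.sock (the socket path is not in the log, so treat it as a guess):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket instead of TCP."""
        def __init__(self, path: str) -> None:
            super().__init__("localhost")  # host string only feeds the Host: header
            self._path = path

        def connect(self) -> None:
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, resp.read()[:200])  # 200 and a JSON array, matching the log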
Nov 22 08:48:29 compute-0 nova_compute[189268]: 2025-11-22 08:48:29.838 189273 DEBUG nova.network.neutron [req-c81618a1-e1e6-465a-abaf-a3c0eecdac0d req-b881f4cd-ca8a-46c5-ac3d-2a1dc956dd7c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Updated VIF entry in instance network info cache for port 363e6818-f5a5-4baa-87a9-7526c518ae95. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:48:29 compute-0 nova_compute[189268]: 2025-11-22 08:48:29.839 189273 DEBUG nova.network.neutron [req-c81618a1-e1e6-465a-abaf-a3c0eecdac0d req-b881f4cd-ca8a-46c5-ac3d-2a1dc956dd7c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Updating instance_info_cache with network_info: [{"id": "363e6818-f5a5-4baa-87a9-7526c518ae95", "address": "fa:16:3e:4c:a7:0e", "network": {"id": "6fab3996-ba47-4d62-be96-e51fc77ca467", "bridge": "br-int", "label": "tempest-ServersTestJSON-1394044478-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "545684c5a33d4873a3184e54d562685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap363e6818-f5", "ovs_interfaceid": "363e6818-f5a5-4baa-87a9-7526c518ae95", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
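The instance_info_cache payload in the previous line is plain JSON, so when troubleshooting it is easier to parse it than to read it inline. A sketch over an abridged copy of the logged blob (field names and values taken verbatim from the log line):

    import json

    payload = '''[{"id": "363e6818-f5a5-4baa-87a9-7526c518ae95",
                   "address": "fa:16:3e:4c:a7:0e",
                   "network": {"id": "6fab3996-ba47-4d62-be96-e51fc77ca467",
                               "subnets": [{"cidr": "10.100.0.0/28",
                                            "ips": [{"address": "10.100.0.11"}]}],
                               "meta": {"mtu": 1442}}}]'''

    for vif in json.loads(payload):
        fixed_ips = [ip["address"]
                     for subnet in vif["network"]["subnets"]
                     for ip in subnet["ips"]]
        print(vif["id"], vif["address"], fixed_ips, "mtu", vif["network"]["meta"]["mtu"])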
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.889 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1894 Content-Type: application/json Date: Sat, 22 Nov 2025 08:48:26 GMT Keep-Alive: timeout=5, max=98 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-59acfd2d-e2d7-41b1-a25b-af94ecdda084 x-openstack-request-id: req-59acfd2d-e2d7-41b1-a25b-af94ecdda084 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.890 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "a04b24d5-3478-4e5f-bb51-abf299fa4459", "name": "tempest-ServersTestManualDisk-server-1220793961", "status": "ACTIVE", "tenant_id": "21dde3ab59bc4d5c890712c19e1b5ec8", "user_id": "5fe0ae1f27fc4a9ea04dde879cc50cba", "metadata": {"hello": "world"}, "hostId": "e244f60136a20eaaaac02782cf3148eea5a479376f9f3f485e0d1196", "image": {"id": "ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc"}]}, "flavor": {"id": "60cc47c3-347f-4964-bb52-9bef8d0548a9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/60cc47c3-347f-4964-bb52-9bef8d0548a9"}]}, "created": "2025-11-22T08:47:50Z", "updated": "2025-11-22T08:48:25Z", "addresses": {"tempest-ServersTestManualDisk-890547167-network": [{"version": 4, "addr": "10.100.0.4", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:3c:b1:72"}]}, "accessIPv4": "1.1.1.1", "accessIPv6": "::babe:dc0c:1602", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/a04b24d5-3478-4e5f-bb51-abf299fa4459"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/a04b24d5-3478-4e5f-bb51-abf299fa4459"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-884669752", "OS-SRV-USG:launched_at": "2025-11-22T08:48:25.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--86729514"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.890 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a04b24d5-3478-4e5f-bb51-abf299fa4459 used request id req-59acfd2d-e2d7-41b1-a25b-af94ecdda084 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.891 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a04b24d5-3478-4e5f-bb51-abf299fa4459', 'name': 'tempest-ServersTestManualDisk-server-1220793961', 'flavor': {'id': '60cc47c3-347f-4964-bb52-9bef8d0548a9', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '21dde3ab59bc4d5c890712c19e1b5ec8', 'user_id': '5fe0ae1f27fc4a9ea04dde879cc50cba', 'hostId': 'e244f60136a20eaaaac02782cf3148eea5a479376f9f3f485e0d1196', 'status': 'active', 'metadata': {'hello': 'world'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.892 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.892 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.892 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.893 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.893 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-22T08:48:29.892959) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.898 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 4414e066-bc1a-4a63-b3a0-5e88f0553032 / tap3f5ad619-9c inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.899 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.902 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 81db0af1-e2c6-4f76-a043-9d51b0431db0 / tap5646e04c-95 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.903 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.906 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for a04b24d5-3478-4e5f-bb51-abf299fa4459 / tapfbd5a3ad-e5 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.907 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.907 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.908 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.908 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.908 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.908 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.909 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.909 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.909 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-22T08:48:29.909060) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.910 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.910 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.911 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.911 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.911 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.911 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.912 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.912 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.912 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-22T08:48:29.912405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.913 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.913 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.914 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.914 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.914 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.915 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.915 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.915 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.916 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.916 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-22T08:48:29.916001) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.916 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.917 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.917 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.918 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.918 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.918 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.918 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.919 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.919 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.920 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-22T08:48:29.919492) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.943 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/cpu volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.966 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/cpu volume: 1100000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.990 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/cpu volume: 4940000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.991 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.992 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.992 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.992 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.993 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.994 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.994 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-22T08:48:29.993850) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.995 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.995 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.996 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.997 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.997 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.998 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.998 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.998 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.999 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:29 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:29.999 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-22T08:48:29.998910) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.004 189273 DEBUG oslo_concurrency.lockutils [req-c81618a1-e1e6-465a-abaf-a3c0eecdac0d req-b881f4cd-ca8a-46c5-ac3d-2a1dc956dd7c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-9f91d44e-f61c-44ca-b623-140121eb8965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.013 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.014 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.045 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.045 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.059 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.059 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.060 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.060 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.060 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.060 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.061 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.061 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.061 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-22T08:48:30.061372) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.093 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.094 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.141 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.device.read.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.141 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.182 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.183 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.183 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.184 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.184 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.184 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.184 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.185 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.185 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-22T08:48:30.185068) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.185 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.185 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.186 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.186 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.187 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.187 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.187 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.188 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.188 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.188 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.188 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.189 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.189 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-22T08:48:30.188965) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.189 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.190 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.190 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.190 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.191 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.191 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.191 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.191 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.192 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-22T08:48:30.192039) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.192 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.192 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1615837079>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1971201621>, <NovaLikeServer: tempest-ServersTestManualDisk-server-1220793961>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1615837079>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1971201621>, <NovaLikeServer: tempest-ServersTestManualDisk-server-1220793961>]
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.193 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.193 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.193 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.194 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.194 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.194 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-22T08:48:30.194431) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.195 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.195 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.195 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.device.read.latency volume: 35755842 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.196 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.device.read.latency volume: 51133064 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.196 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.device.read.latency volume: 2702375075 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.196 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.device.read.latency volume: 30742507 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.197 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.197 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.197 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.198 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.198 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.198 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-22T08:48:30.198635) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.199 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.199 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.200 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.200 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.200 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.201 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.201 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.201 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.201 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.202 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-22T08:48:30.201732) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.202 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.202 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.203 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.203 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.203 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.204 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.204 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.204 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.205 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.205 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.205 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.205 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.206 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-22T08:48:30.205594) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.206 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.206 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.206 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.207 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.207 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.208 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.208 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.208 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.209 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.209 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.209 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.209 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.210 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-22T08:48:30.209679) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.210 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.210 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.211 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.211 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.211 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.211 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.212 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.213 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.213 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.213 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.214 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-22T08:48:30.213952) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.214 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.215 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.215 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.215 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.216 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.216 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.216 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.217 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.217 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.217 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.217 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.218 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.218 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/power.state volume: 3 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.218 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-22T08:48:30.217931) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.218 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.219 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.219 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.219 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.219 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.220 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.220 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.220 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.221 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-22T08:48:30.220608) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.221 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.221 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.221 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.222 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.222 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.222 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.223 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.223 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.223 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.224 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.224 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.224 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.224 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-22T08:48:30.224486) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.225 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.225 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.225 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.226 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.226 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.226 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.227 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.227 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-22T08:48:30.226490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.227 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.227 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.227 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.228 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.228 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.228 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.228 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.229 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.229 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-22T08:48:30.228989) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.230 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.230 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.230 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.230 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.231 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.231 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.231 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-22T08:48:30.231218) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.231 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.232 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.232 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.232 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.233 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.233 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.233 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.233 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.234 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.234 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-22T08:48:30.234026) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.234 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.234 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.235 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.235 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.236 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.236 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.236 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.236 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.237 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.237 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-22T08:48:30.236904) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.237 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.237 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1615837079>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1971201621>, <NovaLikeServer: tempest-ServersTestManualDisk-server-1220793961>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1615837079>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1971201621>, <NovaLikeServer: tempest-ServersTestManualDisk-server-1220793961>]
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.237 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.238 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.238 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.238 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.239 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.239 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-22T08:48:30.238938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.239 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.239 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 4414e066-bc1a-4a63-b3a0-5e88f0553032: ceilometer.compute.pollsters.NoVolumeException
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.239 15 DEBUG ceilometer.compute.pollsters [-] 81db0af1-e2c6-4f76-a043-9d51b0431db0/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.239 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 81db0af1-e2c6-4f76-a043-9d51b0431db0: ceilometer.compute.pollsters.NoVolumeException
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.239 15 DEBUG ceilometer.compute.pollsters [-] a04b24d5-3478-4e5f-bb51-abf299fa4459/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.239 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance a04b24d5-3478-4e5f-bb51-abf299fa4459: ceilometer.compute.pollsters.NoVolumeException
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.240 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.241 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.241 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.241 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.241 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.242 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.242 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.242 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.242 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.243 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.243 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.243 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.243 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.244 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.244 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.244 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.244 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.244 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.245 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.245 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.245 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.245 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.246 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:48:30 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:48:30.246 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
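Every pollster in the task logs one "Finished processing pollster [...]" DEBUG line, which makes a polling cycle easy to audit from a journal capture. A small standard-library sketch (hypothetical helper) that extracts the pollster names in completion order:

    import re
    import sys

    # Matches e.g. "Finished processing pollster [disk.device.usage]."
    POLLSTER_RE = re.compile(r'Finished processing pollster \[([^\]]+)\]')

    def finished_pollsters(lines):
        """Yield pollster names in the order they completed."""
        for line in lines:
            m = POLLSTER_RE.search(line)
            if m:
                yield m.group(1)

    if __name__ == '__main__':
        for name in finished_pollsters(sys.stdin):
            print(name)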
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.966 189273 DEBUG nova.compute.manager [req-63433f69-add0-4f41-ae8d-9d0b0f5fd3dd req-82bf1feb-249f-49fc-952c-c7b2d133f9c3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Received event network-vif-plugged-fbd5a3ad-e519-4a3f-ab67-99a00166bd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.966 189273 DEBUG oslo_concurrency.lockutils [req-63433f69-add0-4f41-ae8d-9d0b0f5fd3dd req-82bf1feb-249f-49fc-952c-c7b2d133f9c3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "a04b24d5-3478-4e5f-bb51-abf299fa4459-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.967 189273 DEBUG oslo_concurrency.lockutils [req-63433f69-add0-4f41-ae8d-9d0b0f5fd3dd req-82bf1feb-249f-49fc-952c-c7b2d133f9c3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "a04b24d5-3478-4e5f-bb51-abf299fa4459-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.967 189273 DEBUG oslo_concurrency.lockutils [req-63433f69-add0-4f41-ae8d-9d0b0f5fd3dd req-82bf1feb-249f-49fc-952c-c7b2d133f9c3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "a04b24d5-3478-4e5f-bb51-abf299fa4459-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.968 189273 DEBUG nova.compute.manager [req-63433f69-add0-4f41-ae8d-9d0b0f5fd3dd req-82bf1feb-249f-49fc-952c-c7b2d133f9c3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] No waiting events found dispatching network-vif-plugged-fbd5a3ad-e519-4a3f-ab67-99a00166bd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.968 189273 WARNING nova.compute.manager [req-63433f69-add0-4f41-ae8d-9d0b0f5fd3dd req-82bf1feb-249f-49fc-952c-c7b2d133f9c3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Received unexpected event network-vif-plugged-fbd5a3ad-e519-4a3f-ab67-99a00166bd4c for instance with vm_state active and task_state None.
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.969 189273 DEBUG nova.compute.manager [req-63433f69-add0-4f41-ae8d-9d0b0f5fd3dd req-82bf1feb-249f-49fc-952c-c7b2d133f9c3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Received event network-vif-plugged-363e6818-f5a5-4baa-87a9-7526c518ae95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.969 189273 DEBUG oslo_concurrency.lockutils [req-63433f69-add0-4f41-ae8d-9d0b0f5fd3dd req-82bf1feb-249f-49fc-952c-c7b2d133f9c3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "9f91d44e-f61c-44ca-b623-140121eb8965-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.970 189273 DEBUG oslo_concurrency.lockutils [req-63433f69-add0-4f41-ae8d-9d0b0f5fd3dd req-82bf1feb-249f-49fc-952c-c7b2d133f9c3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "9f91d44e-f61c-44ca-b623-140121eb8965-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.970 189273 DEBUG oslo_concurrency.lockutils [req-63433f69-add0-4f41-ae8d-9d0b0f5fd3dd req-82bf1feb-249f-49fc-952c-c7b2d133f9c3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "9f91d44e-f61c-44ca-b623-140121eb8965-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.970 189273 DEBUG nova.compute.manager [req-63433f69-add0-4f41-ae8d-9d0b0f5fd3dd req-82bf1feb-249f-49fc-952c-c7b2d133f9c3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Processing event network-vif-plugged-363e6818-f5a5-4baa-87a9-7526c518ae95 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.971 189273 DEBUG nova.compute.manager [req-63433f69-add0-4f41-ae8d-9d0b0f5fd3dd req-82bf1feb-249f-49fc-952c-c7b2d133f9c3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Received event network-vif-plugged-363e6818-f5a5-4baa-87a9-7526c518ae95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.971 189273 DEBUG oslo_concurrency.lockutils [req-63433f69-add0-4f41-ae8d-9d0b0f5fd3dd req-82bf1feb-249f-49fc-952c-c7b2d133f9c3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "9f91d44e-f61c-44ca-b623-140121eb8965-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.972 189273 DEBUG oslo_concurrency.lockutils [req-63433f69-add0-4f41-ae8d-9d0b0f5fd3dd req-82bf1feb-249f-49fc-952c-c7b2d133f9c3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "9f91d44e-f61c-44ca-b623-140121eb8965-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.972 189273 DEBUG oslo_concurrency.lockutils [req-63433f69-add0-4f41-ae8d-9d0b0f5fd3dd req-82bf1feb-249f-49fc-952c-c7b2d133f9c3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "9f91d44e-f61c-44ca-b623-140121eb8965-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.973 189273 DEBUG nova.compute.manager [req-63433f69-add0-4f41-ae8d-9d0b0f5fd3dd req-82bf1feb-249f-49fc-952c-c7b2d133f9c3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] No waiting events found dispatching network-vif-plugged-363e6818-f5a5-4baa-87a9-7526c518ae95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.973 189273 WARNING nova.compute.manager [req-63433f69-add0-4f41-ae8d-9d0b0f5fd3dd req-82bf1feb-249f-49fc-952c-c7b2d133f9c3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Received unexpected event network-vif-plugged-363e6818-f5a5-4baa-87a9-7526c518ae95 for instance with vm_state building and task_state spawning.
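The sequence above is nova's external-event handshake: a per-instance registry of expected events guarded by the "<uuid>-events" lock, where pop either completes a waiter or, as here, finds none and the event is logged as unexpected. A minimal sketch of that pattern using only the standard library (hypothetical class, not nova's implementation):

    import threading
    from collections import defaultdict

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()       # plays the role of "<uuid>-events"
            self._waiters = defaultdict(dict)   # uuid -> {event_name: Event}

        def prepare(self, uuid, name):
            """Register interest before triggering the external action."""
            with self._lock:
                ev = threading.Event()
                self._waiters[uuid][name] = ev
                return ev

        def pop(self, uuid, name):
            with self._lock:
                return self._waiters[uuid].pop(name, None)

    def dispatch(registry, uuid, name):
        waiter = registry.pop(uuid, name)
        if waiter is None:
            # Equivalent to "No waiting events found dispatching ..." followed
            # by the "Received unexpected event" WARNING above.
            print('unexpected event', name, 'for instance', uuid)
        else:
            waiter.set()  # wakes the thread blocked in the wait helper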
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.975 189273 DEBUG nova.compute.manager [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.981 189273 DEBUG nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.982 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801310.982266, 9f91d44e-f61c-44ca-b623-140121eb8965 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.983 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] VM Resumed (Lifecycle Event)
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.989 189273 INFO nova.virt.libvirt.driver [-] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Instance spawned successfully.
Nov 22 08:48:30 compute-0 nova_compute[189268]: 2025-11-22 08:48:30.989 189273 DEBUG nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 08:48:31 compute-0 nova_compute[189268]: 2025-11-22 08:48:31.006 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:48:31 compute-0 nova_compute[189268]: 2025-11-22 08:48:31.017 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:48:31 compute-0 nova_compute[189268]: 2025-11-22 08:48:31.023 189273 DEBUG nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:31 compute-0 nova_compute[189268]: 2025-11-22 08:48:31.024 189273 DEBUG nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:31 compute-0 nova_compute[189268]: 2025-11-22 08:48:31.024 189273 DEBUG nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:31 compute-0 nova_compute[189268]: 2025-11-22 08:48:31.025 189273 DEBUG nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:31 compute-0 nova_compute[189268]: 2025-11-22 08:48:31.026 189273 DEBUG nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:31 compute-0 nova_compute[189268]: 2025-11-22 08:48:31.026 189273 DEBUG nova.virt.libvirt.driver [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:31 compute-0 nova_compute[189268]: 2025-11-22 08:48:31.050 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:48:31 compute-0 nova_compute[189268]: 2025-11-22 08:48:31.256 189273 INFO nova.compute.manager [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Took 28.16 seconds to spawn the instance on the hypervisor.
Nov 22 08:48:31 compute-0 nova_compute[189268]: 2025-11-22 08:48:31.257 189273 DEBUG nova.compute.manager [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:48:31 compute-0 openstack_network_exporter[205661]: ERROR   08:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:48:31 compute-0 openstack_network_exporter[205661]: ERROR   08:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:48:31 compute-0 openstack_network_exporter[205661]: ERROR   08:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:48:31 compute-0 openstack_network_exporter[205661]: ERROR   08:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:48:31 compute-0 openstack_network_exporter[205661]: ERROR   08:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
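The exporter errors above mean it could not find the unixctl control sockets for ovs-vswitchd/ovsdb-server (and on a compute node ovn-northd does not run at all, so that lookup is expected to fail). A quick diagnostic sketch, assuming the default rundir /var/run/openvswitch; in containerized deployments the sockets may live elsewhere:

    import glob
    import os

    RUN_DIR = '/var/run/openvswitch'  # assumption: default, non-containerized rundir

    def control_sockets():
        """List the *.ctl unixctl sockets that ovs-appctl would target."""
        return sorted(glob.glob(os.path.join(RUN_DIR, '*.ctl')))

    if __name__ == '__main__':
        socks = control_sockets()
        if not socks:
            print('no control socket files found under', RUN_DIR)
        for s in socks:
            print(s)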
Nov 22 08:48:31 compute-0 nova_compute[189268]: 2025-11-22 08:48:31.429 189273 INFO nova.compute.manager [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Took 29.23 seconds to build instance.
Nov 22 08:48:31 compute-0 nova_compute[189268]: 2025-11-22 08:48:31.645 189273 DEBUG oslo_concurrency.lockutils [None req-ffe0df3e-fe46-4475-97e3-b370f540dea5 d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lock "9f91d44e-f61c-44ca-b623-140121eb8965" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 29.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
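Two INFO lines bracket the boot: 28.16 s to spawn on the hypervisor against 29.23 s for the whole build, so roughly a second went to pre/post-spawn work (network allocation, block-device mapping, state updates). A standard-library sketch for pulling those timings out of a journal capture; the pattern strings are copied from the messages above:

    import re

    TOOK_RE = re.compile(
        r'\[instance: (?P<uuid>[0-9a-f-]{36})\] Took (?P<secs>[\d.]+) seconds '
        r'to (?P<phase>spawn the instance on the hypervisor|build instance)')

    def build_timings(lines):
        """Return {uuid: {'spawn': s, 'build': s}} from spawn/build INFO lines."""
        out = {}
        for line in lines:
            m = TOOK_RE.search(line)
            if m:
                phase = 'spawn' if m.group('phase').startswith('spawn') else 'build'
                out.setdefault(m.group('uuid'), {})[phase] = float(m.group('secs'))
        return out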
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.074 189273 DEBUG nova.compute.manager [req-809d6e7b-c6ad-4441-b9c2-c38d56acbaa1 req-4f3ea8d3-1ba1-4b8f-a58f-2eea0f65d93c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Received event network-vif-plugged-5646e04c-958a-4629-b420-730d4967f183 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.075 189273 DEBUG oslo_concurrency.lockutils [req-809d6e7b-c6ad-4441-b9c2-c38d56acbaa1 req-4f3ea8d3-1ba1-4b8f-a58f-2eea0f65d93c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "81db0af1-e2c6-4f76-a043-9d51b0431db0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.076 189273 DEBUG oslo_concurrency.lockutils [req-809d6e7b-c6ad-4441-b9c2-c38d56acbaa1 req-4f3ea8d3-1ba1-4b8f-a58f-2eea0f65d93c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "81db0af1-e2c6-4f76-a043-9d51b0431db0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.077 189273 DEBUG oslo_concurrency.lockutils [req-809d6e7b-c6ad-4441-b9c2-c38d56acbaa1 req-4f3ea8d3-1ba1-4b8f-a58f-2eea0f65d93c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "81db0af1-e2c6-4f76-a043-9d51b0431db0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.078 189273 DEBUG nova.compute.manager [req-809d6e7b-c6ad-4441-b9c2-c38d56acbaa1 req-4f3ea8d3-1ba1-4b8f-a58f-2eea0f65d93c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] No waiting events found dispatching network-vif-plugged-5646e04c-958a-4629-b420-730d4967f183 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.078 189273 WARNING nova.compute.manager [req-809d6e7b-c6ad-4441-b9c2-c38d56acbaa1 req-4f3ea8d3-1ba1-4b8f-a58f-2eea0f65d93c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Received unexpected event network-vif-plugged-5646e04c-958a-4629-b420-730d4967f183 for instance with vm_state active and task_state None.
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.079 189273 DEBUG nova.compute.manager [req-809d6e7b-c6ad-4441-b9c2-c38d56acbaa1 req-4f3ea8d3-1ba1-4b8f-a58f-2eea0f65d93c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Received event network-vif-plugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.079 189273 DEBUG oslo_concurrency.lockutils [req-809d6e7b-c6ad-4441-b9c2-c38d56acbaa1 req-4f3ea8d3-1ba1-4b8f-a58f-2eea0f65d93c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.080 189273 DEBUG oslo_concurrency.lockutils [req-809d6e7b-c6ad-4441-b9c2-c38d56acbaa1 req-4f3ea8d3-1ba1-4b8f-a58f-2eea0f65d93c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.080 189273 DEBUG oslo_concurrency.lockutils [req-809d6e7b-c6ad-4441-b9c2-c38d56acbaa1 req-4f3ea8d3-1ba1-4b8f-a58f-2eea0f65d93c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.081 189273 DEBUG nova.compute.manager [req-809d6e7b-c6ad-4441-b9c2-c38d56acbaa1 req-4f3ea8d3-1ba1-4b8f-a58f-2eea0f65d93c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Processing event network-vif-plugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.081 189273 DEBUG nova.compute.manager [req-809d6e7b-c6ad-4441-b9c2-c38d56acbaa1 req-4f3ea8d3-1ba1-4b8f-a58f-2eea0f65d93c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Received event network-vif-plugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.082 189273 DEBUG oslo_concurrency.lockutils [req-809d6e7b-c6ad-4441-b9c2-c38d56acbaa1 req-4f3ea8d3-1ba1-4b8f-a58f-2eea0f65d93c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.082 189273 DEBUG oslo_concurrency.lockutils [req-809d6e7b-c6ad-4441-b9c2-c38d56acbaa1 req-4f3ea8d3-1ba1-4b8f-a58f-2eea0f65d93c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.083 189273 DEBUG oslo_concurrency.lockutils [req-809d6e7b-c6ad-4441-b9c2-c38d56acbaa1 req-4f3ea8d3-1ba1-4b8f-a58f-2eea0f65d93c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.084 189273 DEBUG nova.compute.manager [req-809d6e7b-c6ad-4441-b9c2-c38d56acbaa1 req-4f3ea8d3-1ba1-4b8f-a58f-2eea0f65d93c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] No waiting events found dispatching network-vif-plugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.084 189273 WARNING nova.compute.manager [req-809d6e7b-c6ad-4441-b9c2-c38d56acbaa1 req-4f3ea8d3-1ba1-4b8f-a58f-2eea0f65d93c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Received unexpected event network-vif-plugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 for instance with vm_state building and task_state spawning.
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.085 189273 DEBUG nova.compute.manager [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.090 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801312.0900598, 4414e066-bc1a-4a63-b3a0-5e88f0553032 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.091 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] VM Resumed (Lifecycle Event)
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.094 189273 DEBUG nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.103 189273 INFO nova.virt.libvirt.driver [-] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Instance spawned successfully.
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.103 189273 DEBUG nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.112 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.123 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.131 189273 DEBUG nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.132 189273 DEBUG nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.133 189273 DEBUG nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.134 189273 DEBUG nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.135 189273 DEBUG nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.135 189273 DEBUG nova.virt.libvirt.driver [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.140 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.269 189273 INFO nova.compute.manager [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Took 33.69 seconds to spawn the instance on the hypervisor.
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.270 189273 DEBUG nova.compute.manager [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.308 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.334 189273 INFO nova.compute.manager [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Took 35.14 seconds to build instance.
Nov 22 08:48:32 compute-0 nova_compute[189268]: 2025-11-22 08:48:32.351 189273 DEBUG oslo_concurrency.lockutils [None req-8c7fba2f-8ece-4b46-a48c-0e27a38572b6 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 35.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
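The lock bookkeeping in these DEBUG lines ("acquired ... waited", "released ... held") is emitted by oslo.concurrency whenever a synchronized section runs; here the per-instance lock was held for the full 35.552 s build. A minimal usage sketch (the lock name is copied from the log; the function body is hypothetical):

    from oslo_concurrency import lockutils

    # Entering logs 'Lock "<name>" acquired by ... :: waited Xs' and leaving
    # logs '... "released" ... :: held Ys', matching the lines above.
    @lockutils.synchronized('4414e066-bc1a-4a63-b3a0-5e88f0553032')
    def do_build_and_run_instance():
        pass  # build work happens here while the lock is held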
Nov 22 08:48:33 compute-0 nova_compute[189268]: 2025-11-22 08:48:33.127 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:34 compute-0 nova_compute[189268]: 2025-11-22 08:48:34.182 189273 DEBUG nova.compute.manager [req-77d9e52f-9764-4baa-934c-1a3a33689276 req-7e96e300-165e-4c8a-8f0c-881ed64e3aae 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Received event network-changed-fbd5a3ad-e519-4a3f-ab67-99a00166bd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:34 compute-0 nova_compute[189268]: 2025-11-22 08:48:34.184 189273 DEBUG nova.compute.manager [req-77d9e52f-9764-4baa-934c-1a3a33689276 req-7e96e300-165e-4c8a-8f0c-881ed64e3aae 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Refreshing instance network info cache due to event network-changed-fbd5a3ad-e519-4a3f-ab67-99a00166bd4c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:48:34 compute-0 nova_compute[189268]: 2025-11-22 08:48:34.184 189273 DEBUG oslo_concurrency.lockutils [req-77d9e52f-9764-4baa-934c-1a3a33689276 req-7e96e300-165e-4c8a-8f0c-881ed64e3aae 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-a04b24d5-3478-4e5f-bb51-abf299fa4459" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:48:34 compute-0 nova_compute[189268]: 2025-11-22 08:48:34.185 189273 DEBUG oslo_concurrency.lockutils [req-77d9e52f-9764-4baa-934c-1a3a33689276 req-7e96e300-165e-4c8a-8f0c-881ed64e3aae 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-a04b24d5-3478-4e5f-bb51-abf299fa4459" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:48:34 compute-0 nova_compute[189268]: 2025-11-22 08:48:34.185 189273 DEBUG nova.network.neutron [req-77d9e52f-9764-4baa-934c-1a3a33689276 req-7e96e300-165e-4c8a-8f0c-881ed64e3aae 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Refreshing network info cache for port fbd5a3ad-e519-4a3f-ab67-99a00166bd4c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:48:34 compute-0 nova_compute[189268]: 2025-11-22 08:48:34.982 189273 DEBUG oslo_concurrency.lockutils [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Acquiring lock "a04b24d5-3478-4e5f-bb51-abf299fa4459" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:34 compute-0 nova_compute[189268]: 2025-11-22 08:48:34.983 189273 DEBUG oslo_concurrency.lockutils [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lock "a04b24d5-3478-4e5f-bb51-abf299fa4459" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:34 compute-0 nova_compute[189268]: 2025-11-22 08:48:34.983 189273 DEBUG oslo_concurrency.lockutils [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Acquiring lock "a04b24d5-3478-4e5f-bb51-abf299fa4459-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:34 compute-0 nova_compute[189268]: 2025-11-22 08:48:34.984 189273 DEBUG oslo_concurrency.lockutils [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lock "a04b24d5-3478-4e5f-bb51-abf299fa4459-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:34 compute-0 nova_compute[189268]: 2025-11-22 08:48:34.984 189273 DEBUG oslo_concurrency.lockutils [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lock "a04b24d5-3478-4e5f-bb51-abf299fa4459-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:34 compute-0 nova_compute[189268]: 2025-11-22 08:48:34.986 189273 INFO nova.compute.manager [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Terminating instance
Nov 22 08:48:34 compute-0 nova_compute[189268]: 2025-11-22 08:48:34.987 189273 DEBUG nova.compute.manager [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 08:48:35 compute-0 kernel: tapfbd5a3ad-e5 (unregistering): left promiscuous mode
Nov 22 08:48:35 compute-0 NetworkManager[56326]: <info>  [1763801315.0205] device (tapfbd5a3ad-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.039 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:35 compute-0 ovn_controller[97783]: 2025-11-22T08:48:35Z|00094|binding|INFO|Releasing lport fbd5a3ad-e519-4a3f-ab67-99a00166bd4c from this chassis (sb_readonly=0)
Nov 22 08:48:35 compute-0 ovn_controller[97783]: 2025-11-22T08:48:35Z|00095|binding|INFO|Setting lport fbd5a3ad-e519-4a3f-ab67-99a00166bd4c down in Southbound
Nov 22 08:48:35 compute-0 ovn_controller[97783]: 2025-11-22T08:48:35Z|00096|binding|INFO|Removing iface tapfbd5a3ad-e5 ovn-installed in OVS
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.058 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.060 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:35 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:35.068 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:b1:72 10.100.0.4'], port_security=['fa:16:3e:3c:b1:72 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a04b24d5-3478-4e5f-bb51-abf299fa4459', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1d6d43d-5b47-494d-a955-bb769150c95d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21dde3ab59bc4d5c890712c19e1b5ec8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '025c2d69-95c4-4db4-b22f-bb23cfb7a649', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.194'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed50ec9b-74d3-4ca4-8425-6eb8a7e767c0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=fbd5a3ad-e519-4a3f-ab67-99a00166bd4c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:48:35 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:35.070 106642 INFO neutron.agent.ovn.metadata.agent [-] Port fbd5a3ad-e519-4a3f-ab67-99a00166bd4c in datapath c1d6d43d-5b47-494d-a955-bb769150c95d unbound from our chassis
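The metadata agent learned of the port going down through an ovsdbapp row event: the matched PortBindingUpdatedEvent above watches the southbound Port_Binding table for updates. A sketch of such an event class with ovsdbapp; the constructor arguments mirror the matched event in the log, while the run() body is hypothetical:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on 'update' of any Port_Binding row, no column conditions,
            # as in: events=('update',), table='Port_Binding', conditions=None.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries only the columns that changed, e.g. up=[True]
            # before the port was set down in the Southbound DB.
            print('Port', row.logical_port, 'up ->', row.up)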
Nov 22 08:48:35 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 22 08:48:35 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 10.740s CPU time.
Nov 22 08:48:35 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:35.074 106642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c1d6d43d-5b47-494d-a955-bb769150c95d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 08:48:35 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:35.076 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[da54b3fb-2ef0-4562-b4c6-d2f5b81d61f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:35 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:35.077 106642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d namespace which is not needed anymore
Nov 22 08:48:35 compute-0 systemd-machined[155703]: Machine qemu-7-instance-00000007 terminated.
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.278 189273 INFO nova.virt.libvirt.driver [-] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Instance destroyed successfully.
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.279 189273 DEBUG nova.objects.instance [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lazy-loading 'resources' on Instance uuid a04b24d5-3478-4e5f-bb51-abf299fa4459 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.293 189273 DEBUG nova.virt.libvirt.vif [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T08:47:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1220793961',display_name='tempest-ServersTestManualDisk-server-1220793961',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1220793961',id=7,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMHSXyv5/0Hlx4i0OgKhjpEjPmuanREUsyUDnTJ7rFrTHFiHEnaLMIfwHDH01Ks8d9pDlbN2I8RDvKuUXlCzQJWqREG2cSupdPUUp/0yrCSVVH27nlxpF76AAlKTR9RoYA==',key_name='tempest-keypair-884669752',keypairs=<?>,launch_index=0,launched_at=2025-11-22T08:48:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='21dde3ab59bc4d5c890712c19e1b5ec8',ramdisk_id='',reservation_id='r-cfysm7ui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1224175633',owner_user_name='tempest-ServersTestManualDisk-1224175633-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T08:48:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5fe0ae1f27fc4a9ea04dde879cc50cba',uuid=a04b24d5-3478-4e5f-bb51-abf299fa4459,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "address": "fa:16:3e:3c:b1:72", "network": {"id": "c1d6d43d-5b47-494d-a955-bb769150c95d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-890547167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21dde3ab59bc4d5c890712c19e1b5ec8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbd5a3ad-e5", "ovs_interfaceid": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.293 189273 DEBUG nova.network.os_vif_util [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Converting VIF {"id": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "address": "fa:16:3e:3c:b1:72", "network": {"id": "c1d6d43d-5b47-494d-a955-bb769150c95d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-890547167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21dde3ab59bc4d5c890712c19e1b5ec8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbd5a3ad-e5", "ovs_interfaceid": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.294 189273 DEBUG nova.network.os_vif_util [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:b1:72,bridge_name='br-int',has_traffic_filtering=True,id=fbd5a3ad-e519-4a3f-ab67-99a00166bd4c,network=Network(c1d6d43d-5b47-494d-a955-bb769150c95d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbd5a3ad-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.294 189273 DEBUG os_vif [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:b1:72,bridge_name='br-int',has_traffic_filtering=True,id=fbd5a3ad-e519-4a3f-ab67-99a00166bd4c,network=Network(c1d6d43d-5b47-494d-a955-bb769150c95d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbd5a3ad-e5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.296 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.296 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbd5a3ad-e5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.307 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.310 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.317 189273 INFO os_vif [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:b1:72,bridge_name='br-int',has_traffic_filtering=True,id=fbd5a3ad-e519-4a3f-ab67-99a00166bd4c,network=Network(c1d6d43d-5b47-494d-a955-bb769150c95d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbd5a3ad-e5')
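Unplugging went through the os-vif library: nova converted its VIF dict to a VIFOpenVSwitch object and handed it to os_vif.unplug(), which issued the DelPortCommand seen above. A minimal sketch of that call path; the field values are copied from the log, but constructing a bare object like this is illustrative, not how nova wires it up:

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # loads the installed os-vif plugins (ovs, noop, ...)

    my_vif = vif.VIFOpenVSwitch(
        id='fbd5a3ad-e519-4a3f-ab67-99a00166bd4c',
        address='fa:16:3e:3c:b1:72',
        bridge_name='br-int',
        vif_name='tapfbd5a3ad-e5')
    info = instance_info.InstanceInfo(
        uuid='a04b24d5-3478-4e5f-bb51-abf299fa4459',
        name='instance-00000007')

    os_vif.unplug(my_vif, info)  # removes tapfbd5a3ad-e5 from br-int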
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.318 189273 INFO nova.virt.libvirt.driver [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Deleting instance files /var/lib/nova/instances/a04b24d5-3478-4e5f-bb51-abf299fa4459_del
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.319 189273 INFO nova.virt.libvirt.driver [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Deletion of /var/lib/nova/instances/a04b24d5-3478-4e5f-bb51-abf299fa4459_del complete
Nov 22 08:48:35 compute-0 neutron-haproxy-ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d[250765]: [NOTICE]   (250769) : haproxy version is 2.8.14-c23fe91
Nov 22 08:48:35 compute-0 neutron-haproxy-ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d[250765]: [NOTICE]   (250769) : path to executable is /usr/sbin/haproxy
Nov 22 08:48:35 compute-0 neutron-haproxy-ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d[250765]: [WARNING]  (250769) : Exiting Master process...
Nov 22 08:48:35 compute-0 neutron-haproxy-ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d[250765]: [ALERT]    (250769) : Current worker (250771) exited with code 143 (Terminated)
Nov 22 08:48:35 compute-0 neutron-haproxy-ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d[250765]: [WARNING]  (250769) : All workers exited. Exiting... (0)
Nov 22 08:48:35 compute-0 systemd[1]: libpod-b0280d202b8715c8f32d7a4e6960cb1f2325c66f1ead1ebf888135ce27a01c6f.scope: Deactivated successfully.
Nov 22 08:48:35 compute-0 podman[251302]: 2025-11-22 08:48:35.336947738 +0000 UTC m=+0.131436505 container died b0280d202b8715c8f32d7a4e6960cb1f2325c66f1ead1ebf888135ce27a01c6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 08:48:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b0280d202b8715c8f32d7a4e6960cb1f2325c66f1ead1ebf888135ce27a01c6f-userdata-shm.mount: Deactivated successfully.
Nov 22 08:48:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-257ae6d208d29ac6186cef476f519546c2e95e4e67f96056000ddacddf325dbd-merged.mount: Deactivated successfully.
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.395 189273 INFO nova.compute.manager [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Took 0.41 seconds to destroy the instance on the hypervisor.
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.395 189273 DEBUG oslo.service.loopingcall [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.395 189273 DEBUG nova.compute.manager [-] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.396 189273 DEBUG nova.network.neutron [-] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 08:48:35 compute-0 podman[251302]: 2025-11-22 08:48:35.417953296 +0000 UTC m=+0.212442063 container cleanup b0280d202b8715c8f32d7a4e6960cb1f2325c66f1ead1ebf888135ce27a01c6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:48:35 compute-0 podman[251330]: 2025-11-22 08:48:35.419308753 +0000 UTC m=+0.121114517 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:48:35 compute-0 podman[251329]: 2025-11-22 08:48:35.42068561 +0000 UTC m=+0.124770726 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 08:48:35 compute-0 systemd[1]: libpod-conmon-b0280d202b8715c8f32d7a4e6960cb1f2325c66f1ead1ebf888135ce27a01c6f.scope: Deactivated successfully.
Nov 22 08:48:35 compute-0 podman[251331]: 2025-11-22 08:48:35.474863477 +0000 UTC m=+0.175324356 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 22 08:48:35 compute-0 podman[251397]: 2025-11-22 08:48:35.523614588 +0000 UTC m=+0.068701229 container remove b0280d202b8715c8f32d7a4e6960cb1f2325c66f1ead1ebf888135ce27a01c6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 22 08:48:35 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:35.533 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3b5e4b-4a4c-48e0-853c-4cc46927f743]: (4, ('Sat Nov 22 08:48:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d (b0280d202b8715c8f32d7a4e6960cb1f2325c66f1ead1ebf888135ce27a01c6f)\nb0280d202b8715c8f32d7a4e6960cb1f2325c66f1ead1ebf888135ce27a01c6f\nSat Nov 22 08:48:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d (b0280d202b8715c8f32d7a4e6960cb1f2325c66f1ead1ebf888135ce27a01c6f)\nb0280d202b8715c8f32d7a4e6960cb1f2325c66f1ead1ebf888135ce27a01c6f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:35 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:35.538 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[25961616-ddeb-4915-9ae3-248100a5e200]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:35 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:35.540 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1d6d43d-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:35 compute-0 kernel: tapc1d6d43d-50: left promiscuous mode
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.547 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.562 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:35 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:35.564 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[38f396fa-d418-432d-b54c-e4fe54dd425a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:35 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:35.580 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[991f868f-b706-440e-9bb6-be1925c1b83d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:35 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:35.582 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[03469627-15e9-49db-8096-50959392a050]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:35 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:35.603 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[504d6eaf-78bf-4a29-9577-f4f70795e1ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640249, 'reachable_time': 16452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251414, 'error': None, 'target': 'ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:35 compute-0 systemd[1]: run-netns-ovnmeta\x2dc1d6d43d\x2d5b47\x2d494d\x2da955\x2dbb769150c95d.mount: Deactivated successfully.
Nov 22 08:48:35 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:35.607 106754 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c1d6d43d-5b47-494d-a955-bb769150c95d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 08:48:35 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:35.607 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[cf8cf604-3c10-4836-a836-97571e6073fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.743 189273 DEBUG nova.compute.manager [req-dc949f61-9f6b-4017-ac1a-ce84dc0f8288 req-c7c710da-30ef-4255-82ac-0cc7366ec73b 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Received event network-vif-unplugged-fbd5a3ad-e519-4a3f-ab67-99a00166bd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.744 189273 DEBUG oslo_concurrency.lockutils [req-dc949f61-9f6b-4017-ac1a-ce84dc0f8288 req-c7c710da-30ef-4255-82ac-0cc7366ec73b 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "a04b24d5-3478-4e5f-bb51-abf299fa4459-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.744 189273 DEBUG oslo_concurrency.lockutils [req-dc949f61-9f6b-4017-ac1a-ce84dc0f8288 req-c7c710da-30ef-4255-82ac-0cc7366ec73b 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "a04b24d5-3478-4e5f-bb51-abf299fa4459-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.744 189273 DEBUG oslo_concurrency.lockutils [req-dc949f61-9f6b-4017-ac1a-ce84dc0f8288 req-c7c710da-30ef-4255-82ac-0cc7366ec73b 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "a04b24d5-3478-4e5f-bb51-abf299fa4459-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.745 189273 DEBUG nova.compute.manager [req-dc949f61-9f6b-4017-ac1a-ce84dc0f8288 req-c7c710da-30ef-4255-82ac-0cc7366ec73b 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] No waiting events found dispatching network-vif-unplugged-fbd5a3ad-e519-4a3f-ab67-99a00166bd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:48:35 compute-0 nova_compute[189268]: 2025-11-22 08:48:35.745 189273 DEBUG nova.compute.manager [req-dc949f61-9f6b-4017-ac1a-ce84dc0f8288 req-c7c710da-30ef-4255-82ac-0cc7366ec73b 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Received event network-vif-unplugged-fbd5a3ad-e519-4a3f-ab67-99a00166bd4c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 08:48:36 compute-0 nova_compute[189268]: 2025-11-22 08:48:36.310 189273 DEBUG nova.network.neutron [req-77d9e52f-9764-4baa-934c-1a3a33689276 req-7e96e300-165e-4c8a-8f0c-881ed64e3aae 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Updated VIF entry in instance network info cache for port fbd5a3ad-e519-4a3f-ab67-99a00166bd4c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:48:36 compute-0 nova_compute[189268]: 2025-11-22 08:48:36.311 189273 DEBUG nova.network.neutron [req-77d9e52f-9764-4baa-934c-1a3a33689276 req-7e96e300-165e-4c8a-8f0c-881ed64e3aae 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Updating instance_info_cache with network_info: [{"id": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "address": "fa:16:3e:3c:b1:72", "network": {"id": "c1d6d43d-5b47-494d-a955-bb769150c95d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-890547167-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21dde3ab59bc4d5c890712c19e1b5ec8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbd5a3ad-e5", "ovs_interfaceid": "fbd5a3ad-e519-4a3f-ab67-99a00166bd4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:48:36 compute-0 nova_compute[189268]: 2025-11-22 08:48:36.336 189273 DEBUG oslo_concurrency.lockutils [req-77d9e52f-9764-4baa-934c-1a3a33689276 req-7e96e300-165e-4c8a-8f0c-881ed64e3aae 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-a04b24d5-3478-4e5f-bb51-abf299fa4459" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:48:37 compute-0 ovn_controller[97783]: 2025-11-22T08:48:37Z|00097|memory|INFO|peak resident set size grew 54% in last 2452.2 seconds, from 16128 kB to 24876 kB
Nov 22 08:48:37 compute-0 ovn_controller[97783]: 2025-11-22T08:48:37Z|00098|memory|INFO|idl-cells-OVN_Southbound:11679 idl-cells-Open_vSwitch:927 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:421 lflow-cache-entries-cache-matches:300 lflow-cache-size-KB:1698 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:720 ofctrl_installed_flow_usage-KB:525 ofctrl_sb_flow_ref_usage-KB:270
Nov 22 08:48:37 compute-0 nova_compute[189268]: 2025-11-22 08:48:37.024 189273 DEBUG nova.compute.manager [req-392cf55c-291e-43fb-a854-fe34e896a40d req-d2024f50-9316-4e7b-9dbe-c9d05639e8fb 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Received event network-changed-5646e04c-958a-4629-b420-730d4967f183 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:37 compute-0 nova_compute[189268]: 2025-11-22 08:48:37.024 189273 DEBUG nova.compute.manager [req-392cf55c-291e-43fb-a854-fe34e896a40d req-d2024f50-9316-4e7b-9dbe-c9d05639e8fb 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Refreshing instance network info cache due to event network-changed-5646e04c-958a-4629-b420-730d4967f183. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:48:37 compute-0 nova_compute[189268]: 2025-11-22 08:48:37.024 189273 DEBUG oslo_concurrency.lockutils [req-392cf55c-291e-43fb-a854-fe34e896a40d req-d2024f50-9316-4e7b-9dbe-c9d05639e8fb 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:48:37 compute-0 nova_compute[189268]: 2025-11-22 08:48:37.024 189273 DEBUG oslo_concurrency.lockutils [req-392cf55c-291e-43fb-a854-fe34e896a40d req-d2024f50-9316-4e7b-9dbe-c9d05639e8fb 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:48:37 compute-0 nova_compute[189268]: 2025-11-22 08:48:37.025 189273 DEBUG nova.network.neutron [req-392cf55c-291e-43fb-a854-fe34e896a40d req-d2024f50-9316-4e7b-9dbe-c9d05639e8fb 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Refreshing network info cache for port 5646e04c-958a-4629-b420-730d4967f183 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:48:37 compute-0 nova_compute[189268]: 2025-11-22 08:48:37.312 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:37 compute-0 nova_compute[189268]: 2025-11-22 08:48:37.832 189273 DEBUG nova.compute.manager [req-5e1311c2-00da-483b-9a23-240ebaae9cfb req-5d12e895-668d-49b5-ab4f-e3d72cd1b968 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Received event network-vif-plugged-fbd5a3ad-e519-4a3f-ab67-99a00166bd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:37 compute-0 nova_compute[189268]: 2025-11-22 08:48:37.832 189273 DEBUG oslo_concurrency.lockutils [req-5e1311c2-00da-483b-9a23-240ebaae9cfb req-5d12e895-668d-49b5-ab4f-e3d72cd1b968 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "a04b24d5-3478-4e5f-bb51-abf299fa4459-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:37 compute-0 nova_compute[189268]: 2025-11-22 08:48:37.832 189273 DEBUG oslo_concurrency.lockutils [req-5e1311c2-00da-483b-9a23-240ebaae9cfb req-5d12e895-668d-49b5-ab4f-e3d72cd1b968 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "a04b24d5-3478-4e5f-bb51-abf299fa4459-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:37 compute-0 nova_compute[189268]: 2025-11-22 08:48:37.833 189273 DEBUG oslo_concurrency.lockutils [req-5e1311c2-00da-483b-9a23-240ebaae9cfb req-5d12e895-668d-49b5-ab4f-e3d72cd1b968 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "a04b24d5-3478-4e5f-bb51-abf299fa4459-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:37 compute-0 nova_compute[189268]: 2025-11-22 08:48:37.833 189273 DEBUG nova.compute.manager [req-5e1311c2-00da-483b-9a23-240ebaae9cfb req-5d12e895-668d-49b5-ab4f-e3d72cd1b968 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] No waiting events found dispatching network-vif-plugged-fbd5a3ad-e519-4a3f-ab67-99a00166bd4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:48:37 compute-0 nova_compute[189268]: 2025-11-22 08:48:37.833 189273 WARNING nova.compute.manager [req-5e1311c2-00da-483b-9a23-240ebaae9cfb req-5d12e895-668d-49b5-ab4f-e3d72cd1b968 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Received unexpected event network-vif-plugged-fbd5a3ad-e519-4a3f-ab67-99a00166bd4c for instance with vm_state active and task_state deleting.
Nov 22 08:48:38 compute-0 ovn_controller[97783]: 2025-11-22T08:48:38Z|00099|binding|INFO|Releasing lport 37fb22bb-e01c-451f-a2d2-26ee384f1620 from this chassis (sb_readonly=0)
Nov 22 08:48:38 compute-0 ovn_controller[97783]: 2025-11-22T08:48:38Z|00100|binding|INFO|Releasing lport 408492d7-9155-4d2b-8e8a-15c1eda4ae9f from this chassis (sb_readonly=0)
Nov 22 08:48:38 compute-0 ovn_controller[97783]: 2025-11-22T08:48:38Z|00101|binding|INFO|Releasing lport 14593604-d14e-4f1d-99d7-97dd69b97e09 from this chassis (sb_readonly=0)
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.329 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.391 189273 DEBUG oslo_concurrency.lockutils [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Acquiring lock "9f91d44e-f61c-44ca-b623-140121eb8965" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.391 189273 DEBUG oslo_concurrency.lockutils [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lock "9f91d44e-f61c-44ca-b623-140121eb8965" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.392 189273 DEBUG oslo_concurrency.lockutils [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Acquiring lock "9f91d44e-f61c-44ca-b623-140121eb8965-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.392 189273 DEBUG oslo_concurrency.lockutils [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lock "9f91d44e-f61c-44ca-b623-140121eb8965-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.392 189273 DEBUG oslo_concurrency.lockutils [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lock "9f91d44e-f61c-44ca-b623-140121eb8965-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.394 189273 INFO nova.compute.manager [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Terminating instance
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.396 189273 DEBUG nova.compute.manager [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 08:48:38 compute-0 kernel: tap363e6818-f5 (unregistering): left promiscuous mode
Nov 22 08:48:38 compute-0 NetworkManager[56326]: <info>  [1763801318.4680] device (tap363e6818-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 08:48:38 compute-0 ovn_controller[97783]: 2025-11-22T08:48:38Z|00102|binding|INFO|Releasing lport 363e6818-f5a5-4baa-87a9-7526c518ae95 from this chassis (sb_readonly=0)
Nov 22 08:48:38 compute-0 ovn_controller[97783]: 2025-11-22T08:48:38Z|00103|binding|INFO|Setting lport 363e6818-f5a5-4baa-87a9-7526c518ae95 down in Southbound
Nov 22 08:48:38 compute-0 ovn_controller[97783]: 2025-11-22T08:48:38Z|00104|binding|INFO|Removing iface tap363e6818-f5 ovn-installed in OVS
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.479 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.490 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:38 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 22 08:48:38 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 8.210s CPU time.
Nov 22 08:48:38 compute-0 systemd-machined[155703]: Machine qemu-10-instance-0000000a terminated.
Nov 22 08:48:38 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:38.583 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:a7:0e 10.100.0.11'], port_security=['fa:16:3e:4c:a7:0e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9f91d44e-f61c-44ca-b623-140121eb8965', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fab3996-ba47-4d62-be96-e51fc77ca467', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '545684c5a33d4873a3184e54d562685f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be733b51-89d2-4915-bff5-02710932177b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec853b59-ccb0-4017-a731-dfff3e782d8f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=363e6818-f5a5-4baa-87a9-7526c518ae95) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:48:38 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:38.586 106642 INFO neutron.agent.ovn.metadata.agent [-] Port 363e6818-f5a5-4baa-87a9-7526c518ae95 in datapath 6fab3996-ba47-4d62-be96-e51fc77ca467 unbound from our chassis
Nov 22 08:48:38 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:38.588 106642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6fab3996-ba47-4d62-be96-e51fc77ca467, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 08:48:38 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:38.590 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[f1d3b13e-8a30-4cc3-a312-b3e948117eb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:38 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:38.590 106642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467 namespace which is not needed anymore
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.619 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.626 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.643 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.644 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.644 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.671 189273 INFO nova.virt.libvirt.driver [-] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Instance destroyed successfully.
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.671 189273 DEBUG nova.objects.instance [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lazy-loading 'resources' on Instance uuid 9f91d44e-f61c-44ca-b623-140121eb8965 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.692 189273 DEBUG nova.virt.libvirt.vif [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T08:48:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-928189389',display_name='tempest-ServersTestJSON-server-928189389',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-928189389',id=10,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCWgBqTZ/0n46so/7K9m+j4+RGHHw3jmz3RC+7lAP0bScTbbGQKh0orPC6DKXFXm1fo2bBGjEJBCyPyL5R3nDM59OEHz9kQPOpDY4hLptHaLVkXrhnvX8tscAPcrH6ebOQ==',key_name='tempest-keypair-1869925021',keypairs=<?>,launch_index=0,launched_at=2025-11-22T08:48:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='545684c5a33d4873a3184e54d562685f',ramdisk_id='',reservation_id='r-34p2j2aw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1120662526',owner_user_name='tempest-ServersTestJSON-1120662526-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T08:48:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d65f035f2b1b49319ad0f75cf17d724a',uuid=9f91d44e-f61c-44ca-b623-140121eb8965,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "363e6818-f5a5-4baa-87a9-7526c518ae95", "address": "fa:16:3e:4c:a7:0e", "network": {"id": "6fab3996-ba47-4d62-be96-e51fc77ca467", "bridge": "br-int", "label": "tempest-ServersTestJSON-1394044478-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "545684c5a33d4873a3184e54d562685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap363e6818-f5", "ovs_interfaceid": "363e6818-f5a5-4baa-87a9-7526c518ae95", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.692 189273 DEBUG nova.network.os_vif_util [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Converting VIF {"id": "363e6818-f5a5-4baa-87a9-7526c518ae95", "address": "fa:16:3e:4c:a7:0e", "network": {"id": "6fab3996-ba47-4d62-be96-e51fc77ca467", "bridge": "br-int", "label": "tempest-ServersTestJSON-1394044478-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "545684c5a33d4873a3184e54d562685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap363e6818-f5", "ovs_interfaceid": "363e6818-f5a5-4baa-87a9-7526c518ae95", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.693 189273 DEBUG nova.network.os_vif_util [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4c:a7:0e,bridge_name='br-int',has_traffic_filtering=True,id=363e6818-f5a5-4baa-87a9-7526c518ae95,network=Network(6fab3996-ba47-4d62-be96-e51fc77ca467),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap363e6818-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.693 189273 DEBUG os_vif [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4c:a7:0e,bridge_name='br-int',has_traffic_filtering=True,id=363e6818-f5a5-4baa-87a9-7526c518ae95,network=Network(6fab3996-ba47-4d62-be96-e51fc77ca467),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap363e6818-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.695 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.695 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap363e6818-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.697 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.702 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.704 189273 INFO os_vif [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4c:a7:0e,bridge_name='br-int',has_traffic_filtering=True,id=363e6818-f5a5-4baa-87a9-7526c518ae95,network=Network(6fab3996-ba47-4d62-be96-e51fc77ca467),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap363e6818-f5')
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.705 189273 INFO nova.virt.libvirt.driver [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Deleting instance files /var/lib/nova/instances/9f91d44e-f61c-44ca-b623-140121eb8965_del
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.705 189273 INFO nova.virt.libvirt.driver [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Deletion of /var/lib/nova/instances/9f91d44e-f61c-44ca-b623-140121eb8965_del complete
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.790 189273 INFO nova.compute.manager [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Took 0.39 seconds to destroy the instance on the hypervisor.
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.792 189273 DEBUG oslo.service.loopingcall [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.794 189273 DEBUG nova.compute.manager [-] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 08:48:38 compute-0 nova_compute[189268]: 2025-11-22 08:48:38.795 189273 DEBUG nova.network.neutron [-] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 08:48:38 compute-0 neutron-haproxy-ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467[251264]: [NOTICE]   (251268) : haproxy version is 2.8.14-c23fe91
Nov 22 08:48:38 compute-0 neutron-haproxy-ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467[251264]: [NOTICE]   (251268) : path to executable is /usr/sbin/haproxy
Nov 22 08:48:38 compute-0 neutron-haproxy-ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467[251264]: [WARNING]  (251268) : Exiting Master process...
Nov 22 08:48:38 compute-0 neutron-haproxy-ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467[251264]: [ALERT]    (251268) : Current worker (251270) exited with code 143 (Terminated)
Nov 22 08:48:38 compute-0 neutron-haproxy-ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467[251264]: [WARNING]  (251268) : All workers exited. Exiting... (0)
Nov 22 08:48:38 compute-0 systemd[1]: libpod-6cae06f2e4f32de914c2b9faad6429a98da05d7fec10345c87513043f5eded16.scope: Deactivated successfully.
Nov 22 08:48:38 compute-0 podman[251453]: 2025-11-22 08:48:38.814902624 +0000 UTC m=+0.079613211 container died 6cae06f2e4f32de914c2b9faad6429a98da05d7fec10345c87513043f5eded16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 08:48:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6cae06f2e4f32de914c2b9faad6429a98da05d7fec10345c87513043f5eded16-userdata-shm.mount: Deactivated successfully.
Nov 22 08:48:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-88bab12c3bf858de61a4a69d7af5068af658f9f8df00f67ba00947dc6db1a114-merged.mount: Deactivated successfully.
Nov 22 08:48:38 compute-0 podman[251453]: 2025-11-22 08:48:38.892883061 +0000 UTC m=+0.157593638 container cleanup 6cae06f2e4f32de914c2b9faad6429a98da05d7fec10345c87513043f5eded16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 08:48:38 compute-0 systemd[1]: libpod-conmon-6cae06f2e4f32de914c2b9faad6429a98da05d7fec10345c87513043f5eded16.scope: Deactivated successfully.
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:48:39 compute-0 podman[251483]: 2025-11-22 08:48:39.128499116 +0000 UTC m=+0.201299144 container remove 6cae06f2e4f32de914c2b9faad6429a98da05d7fec10345c87513043f5eded16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 08:48:39 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:39.136 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[e4bdb805-1f20-47bc-bc62-5a75ae045457]: (4, ('Sat Nov 22 08:48:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467 (6cae06f2e4f32de914c2b9faad6429a98da05d7fec10345c87513043f5eded16)\n6cae06f2e4f32de914c2b9faad6429a98da05d7fec10345c87513043f5eded16\nSat Nov 22 08:48:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467 (6cae06f2e4f32de914c2b9faad6429a98da05d7fec10345c87513043f5eded16)\n6cae06f2e4f32de914c2b9faad6429a98da05d7fec10345c87513043f5eded16\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:39 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:39.143 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa14da2-8765-4866-9b88-9ae2811cca13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:39 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:39.146 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fab3996-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:48:39 compute-0 kernel: tap6fab3996-b0: left promiscuous mode
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.152 189273 DEBUG nova.compute.manager [req-281bb9f4-39c5-4dc3-a6c8-3774352b5f0e req-a23134a1-8d86-4e7b-876e-7c976599eb75 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Received event network-changed-363e6818-f5a5-4baa-87a9-7526c518ae95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.152 189273 DEBUG nova.compute.manager [req-281bb9f4-39c5-4dc3-a6c8-3774352b5f0e req-a23134a1-8d86-4e7b-876e-7c976599eb75 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Refreshing instance network info cache due to event network-changed-363e6818-f5a5-4baa-87a9-7526c518ae95. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.153 189273 DEBUG oslo_concurrency.lockutils [req-281bb9f4-39c5-4dc3-a6c8-3774352b5f0e req-a23134a1-8d86-4e7b-876e-7c976599eb75 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-9f91d44e-f61c-44ca-b623-140121eb8965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.153 189273 DEBUG oslo_concurrency.lockutils [req-281bb9f4-39c5-4dc3-a6c8-3774352b5f0e req-a23134a1-8d86-4e7b-876e-7c976599eb75 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-9f91d44e-f61c-44ca-b623-140121eb8965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.153 189273 DEBUG nova.network.neutron [req-281bb9f4-39c5-4dc3-a6c8-3774352b5f0e req-a23134a1-8d86-4e7b-876e-7c976599eb75 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Refreshing network info cache for port 363e6818-f5a5-4baa-87a9-7526c518ae95 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.154 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.177 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:39 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:39.180 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e8534e-3f2d-436f-b0d6-042854c11e3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.190 189273 DEBUG nova.network.neutron [-] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:48:39 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:39.196 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[2d34c490-a6ab-41ef-a24d-98886066ded4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:39 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:39.198 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[187241b8-6e9e-43c7-b541-42d4594f37bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.209 189273 INFO nova.compute.manager [-] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Took 3.81 seconds to deallocate network for instance.
Nov 22 08:48:39 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:39.226 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[06f48e37-c5fd-4e8e-9143-18019986060a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642029, 'reachable_time': 26112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251495, 'error': None, 'target': 'ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:39 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:39.230 106754 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6fab3996-ba47-4d62-be96-e51fc77ca467 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 08:48:39 compute-0 systemd[1]: run-netns-ovnmeta\x2d6fab3996\x2dba47\x2d4d62\x2dbe96\x2de51fc77ca467.mount: Deactivated successfully.
Nov 22 08:48:39 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:48:39.230 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b245c0-2c69-44f7-bbd9-41edb020e9ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.265 189273 DEBUG oslo_concurrency.lockutils [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.265 189273 DEBUG oslo_concurrency.lockutils [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.380 189273 DEBUG nova.compute.provider_tree [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.401 189273 DEBUG nova.scheduler.client.report [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.496 189273 DEBUG oslo_concurrency.lockutils [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.803 189273 INFO nova.scheduler.client.report [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Deleted allocations for instance a04b24d5-3478-4e5f-bb51-abf299fa4459
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.903 189273 DEBUG oslo_concurrency.lockutils [None req-73fef2b0-fe0f-4f32-9b37-0e6c0c9dba28 5fe0ae1f27fc4a9ea04dde879cc50cba 21dde3ab59bc4d5c890712c19e1b5ec8 - - default default] Lock "a04b24d5-3478-4e5f-bb51-abf299fa4459" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.987 189273 DEBUG nova.compute.manager [req-326c483d-aa3c-408a-aa3c-860ebfb89d25 req-fd7b45cd-4ba6-4d18-b037-4515f1ccc3f9 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Received event network-vif-unplugged-363e6818-f5a5-4baa-87a9-7526c518ae95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.987 189273 DEBUG oslo_concurrency.lockutils [req-326c483d-aa3c-408a-aa3c-860ebfb89d25 req-fd7b45cd-4ba6-4d18-b037-4515f1ccc3f9 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "9f91d44e-f61c-44ca-b623-140121eb8965-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.988 189273 DEBUG oslo_concurrency.lockutils [req-326c483d-aa3c-408a-aa3c-860ebfb89d25 req-fd7b45cd-4ba6-4d18-b037-4515f1ccc3f9 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "9f91d44e-f61c-44ca-b623-140121eb8965-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.988 189273 DEBUG oslo_concurrency.lockutils [req-326c483d-aa3c-408a-aa3c-860ebfb89d25 req-fd7b45cd-4ba6-4d18-b037-4515f1ccc3f9 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "9f91d44e-f61c-44ca-b623-140121eb8965-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.988 189273 DEBUG nova.compute.manager [req-326c483d-aa3c-408a-aa3c-860ebfb89d25 req-fd7b45cd-4ba6-4d18-b037-4515f1ccc3f9 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] No waiting events found dispatching network-vif-unplugged-363e6818-f5a5-4baa-87a9-7526c518ae95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:48:39 compute-0 nova_compute[189268]: 2025-11-22 08:48:39.988 189273 DEBUG nova.compute.manager [req-326c483d-aa3c-408a-aa3c-860ebfb89d25 req-fd7b45cd-4ba6-4d18-b037-4515f1ccc3f9 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Received event network-vif-unplugged-363e6818-f5a5-4baa-87a9-7526c518ae95 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 08:48:40 compute-0 nova_compute[189268]: 2025-11-22 08:48:40.920 189273 DEBUG nova.network.neutron [-] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:48:40 compute-0 nova_compute[189268]: 2025-11-22 08:48:40.938 189273 INFO nova.compute.manager [-] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Took 2.14 seconds to deallocate network for instance.
Nov 22 08:48:41 compute-0 nova_compute[189268]: 2025-11-22 08:48:41.000 189273 DEBUG oslo_concurrency.lockutils [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:41 compute-0 nova_compute[189268]: 2025-11-22 08:48:41.001 189273 DEBUG oslo_concurrency.lockutils [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:41 compute-0 nova_compute[189268]: 2025-11-22 08:48:41.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:48:41 compute-0 nova_compute[189268]: 2025-11-22 08:48:41.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:48:41 compute-0 nova_compute[189268]: 2025-11-22 08:48:41.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:48:41 compute-0 nova_compute[189268]: 2025-11-22 08:48:41.101 189273 DEBUG nova.compute.provider_tree [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:48:41 compute-0 nova_compute[189268]: 2025-11-22 08:48:41.123 189273 DEBUG nova.scheduler.client.report [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:48:41 compute-0 nova_compute[189268]: 2025-11-22 08:48:41.144 189273 DEBUG oslo_concurrency.lockutils [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:41 compute-0 nova_compute[189268]: 2025-11-22 08:48:41.210 189273 INFO nova.scheduler.client.report [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Deleted allocations for instance 9f91d44e-f61c-44ca-b623-140121eb8965
Nov 22 08:48:41 compute-0 nova_compute[189268]: 2025-11-22 08:48:41.304 189273 DEBUG nova.compute.manager [req-bedf432c-c4ab-4197-8a45-22beb1820a91 req-34fe4b31-38d3-4934-abbe-2c992d828fec 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Received event network-vif-deleted-fbd5a3ad-e519-4a3f-ab67-99a00166bd4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:41 compute-0 nova_compute[189268]: 2025-11-22 08:48:41.307 189273 DEBUG oslo_concurrency.lockutils [None req-92ba559b-3151-4e7e-91e0-7c63d62c3a2f d65f035f2b1b49319ad0f75cf17d724a 545684c5a33d4873a3184e54d562685f - - default default] Lock "9f91d44e-f61c-44ca-b623-140121eb8965" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:41 compute-0 nova_compute[189268]: 2025-11-22 08:48:41.612 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-4414e066-bc1a-4a63-b3a0-5e88f0553032" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:48:41 compute-0 nova_compute[189268]: 2025-11-22 08:48:41.612 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-4414e066-bc1a-4a63-b3a0-5e88f0553032" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:48:41 compute-0 nova_compute[189268]: 2025-11-22 08:48:41.612 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:48:41 compute-0 nova_compute[189268]: 2025-11-22 08:48:41.612 189273 DEBUG nova.objects.instance [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4414e066-bc1a-4a63-b3a0-5e88f0553032 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:48:41 compute-0 nova_compute[189268]: 2025-11-22 08:48:41.967 189273 DEBUG nova.network.neutron [req-392cf55c-291e-43fb-a854-fe34e896a40d req-d2024f50-9316-4e7b-9dbe-c9d05639e8fb 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Updated VIF entry in instance network info cache for port 5646e04c-958a-4629-b420-730d4967f183. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:48:41 compute-0 nova_compute[189268]: 2025-11-22 08:48:41.968 189273 DEBUG nova.network.neutron [req-392cf55c-291e-43fb-a854-fe34e896a40d req-d2024f50-9316-4e7b-9dbe-c9d05639e8fb 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Updating instance_info_cache with network_info: [{"id": "5646e04c-958a-4629-b420-730d4967f183", "address": "fa:16:3e:45:c8:ca", "network": {"id": "40cb6b69-21d1-494d-9388-79ae29386703", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1184475015-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3503f7b171c4187acaf1d66e260df45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5646e04c-95", "ovs_interfaceid": "5646e04c-958a-4629-b420-730d4967f183", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:48:41 compute-0 nova_compute[189268]: 2025-11-22 08:48:41.989 189273 DEBUG oslo_concurrency.lockutils [req-392cf55c-291e-43fb-a854-fe34e896a40d req-d2024f50-9316-4e7b-9dbe-c9d05639e8fb 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:48:42 compute-0 nova_compute[189268]: 2025-11-22 08:48:42.082 189273 DEBUG nova.compute.manager [req-64103337-251e-45b7-a37a-7862cc57b28e req-ed747590-ed89-4380-858e-12cff8e67151 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Received event network-vif-plugged-363e6818-f5a5-4baa-87a9-7526c518ae95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:42 compute-0 nova_compute[189268]: 2025-11-22 08:48:42.082 189273 DEBUG oslo_concurrency.lockutils [req-64103337-251e-45b7-a37a-7862cc57b28e req-ed747590-ed89-4380-858e-12cff8e67151 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "9f91d44e-f61c-44ca-b623-140121eb8965-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:42 compute-0 nova_compute[189268]: 2025-11-22 08:48:42.083 189273 DEBUG oslo_concurrency.lockutils [req-64103337-251e-45b7-a37a-7862cc57b28e req-ed747590-ed89-4380-858e-12cff8e67151 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "9f91d44e-f61c-44ca-b623-140121eb8965-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:42 compute-0 nova_compute[189268]: 2025-11-22 08:48:42.083 189273 DEBUG oslo_concurrency.lockutils [req-64103337-251e-45b7-a37a-7862cc57b28e req-ed747590-ed89-4380-858e-12cff8e67151 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "9f91d44e-f61c-44ca-b623-140121eb8965-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:42 compute-0 nova_compute[189268]: 2025-11-22 08:48:42.084 189273 DEBUG nova.compute.manager [req-64103337-251e-45b7-a37a-7862cc57b28e req-ed747590-ed89-4380-858e-12cff8e67151 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] No waiting events found dispatching network-vif-plugged-363e6818-f5a5-4baa-87a9-7526c518ae95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:48:42 compute-0 nova_compute[189268]: 2025-11-22 08:48:42.084 189273 WARNING nova.compute.manager [req-64103337-251e-45b7-a37a-7862cc57b28e req-ed747590-ed89-4380-858e-12cff8e67151 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Received unexpected event network-vif-plugged-363e6818-f5a5-4baa-87a9-7526c518ae95 for instance with vm_state deleted and task_state None.
Nov 22 08:48:42 compute-0 nova_compute[189268]: 2025-11-22 08:48:42.084 189273 DEBUG nova.compute.manager [req-64103337-251e-45b7-a37a-7862cc57b28e req-ed747590-ed89-4380-858e-12cff8e67151 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Received event network-vif-deleted-363e6818-f5a5-4baa-87a9-7526c518ae95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:42 compute-0 nova_compute[189268]: 2025-11-22 08:48:42.315 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:43 compute-0 nova_compute[189268]: 2025-11-22 08:48:43.123 189273 DEBUG nova.network.neutron [req-281bb9f4-39c5-4dc3-a6c8-3774352b5f0e req-a23134a1-8d86-4e7b-876e-7c976599eb75 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Updated VIF entry in instance network info cache for port 363e6818-f5a5-4baa-87a9-7526c518ae95. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:48:43 compute-0 nova_compute[189268]: 2025-11-22 08:48:43.125 189273 DEBUG nova.network.neutron [req-281bb9f4-39c5-4dc3-a6c8-3774352b5f0e req-a23134a1-8d86-4e7b-876e-7c976599eb75 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Updating instance_info_cache with network_info: [{"id": "363e6818-f5a5-4baa-87a9-7526c518ae95", "address": "fa:16:3e:4c:a7:0e", "network": {"id": "6fab3996-ba47-4d62-be96-e51fc77ca467", "bridge": "br-int", "label": "tempest-ServersTestJSON-1394044478-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "545684c5a33d4873a3184e54d562685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap363e6818-f5", "ovs_interfaceid": "363e6818-f5a5-4baa-87a9-7526c518ae95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:48:43 compute-0 nova_compute[189268]: 2025-11-22 08:48:43.195 189273 DEBUG oslo_concurrency.lockutils [req-281bb9f4-39c5-4dc3-a6c8-3774352b5f0e req-a23134a1-8d86-4e7b-876e-7c976599eb75 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-9f91d44e-f61c-44ca-b623-140121eb8965" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:48:43 compute-0 nova_compute[189268]: 2025-11-22 08:48:43.198 189273 DEBUG nova.compute.manager [req-281bb9f4-39c5-4dc3-a6c8-3774352b5f0e req-a23134a1-8d86-4e7b-876e-7c976599eb75 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Received event network-changed-3f5ad619-9cef-49b4-b0fd-8243d3506e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:48:43 compute-0 nova_compute[189268]: 2025-11-22 08:48:43.199 189273 DEBUG nova.compute.manager [req-281bb9f4-39c5-4dc3-a6c8-3774352b5f0e req-a23134a1-8d86-4e7b-876e-7c976599eb75 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Refreshing instance network info cache due to event network-changed-3f5ad619-9cef-49b4-b0fd-8243d3506e32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:48:43 compute-0 nova_compute[189268]: 2025-11-22 08:48:43.200 189273 DEBUG oslo_concurrency.lockutils [req-281bb9f4-39c5-4dc3-a6c8-3774352b5f0e req-a23134a1-8d86-4e7b-876e-7c976599eb75 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-4414e066-bc1a-4a63-b3a0-5e88f0553032" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:48:43 compute-0 nova_compute[189268]: 2025-11-22 08:48:43.699 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:45 compute-0 podman[251496]: 2025-11-22 08:48:45.132595477 +0000 UTC m=+0.086173677 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 08:48:45 compute-0 podman[251497]: 2025-11-22 08:48:45.138568129 +0000 UTC m=+0.086262861 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:48:46 compute-0 nova_compute[189268]: 2025-11-22 08:48:46.153 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:46 compute-0 nova_compute[189268]: 2025-11-22 08:48:46.335 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Updating instance_info_cache with network_info: [{"id": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "address": "fa:16:3e:7a:63:17", "network": {"id": "3485ad45-c98a-4c02-b9a2-34cc945b16d2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1783802964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f5ad619-9c", "ovs_interfaceid": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:48:46 compute-0 nova_compute[189268]: 2025-11-22 08:48:46.355 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-4414e066-bc1a-4a63-b3a0-5e88f0553032" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:48:46 compute-0 nova_compute[189268]: 2025-11-22 08:48:46.356 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 08:48:46 compute-0 nova_compute[189268]: 2025-11-22 08:48:46.357 189273 DEBUG oslo_concurrency.lockutils [req-281bb9f4-39c5-4dc3-a6c8-3774352b5f0e req-a23134a1-8d86-4e7b-876e-7c976599eb75 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-4414e066-bc1a-4a63-b3a0-5e88f0553032" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:48:46 compute-0 nova_compute[189268]: 2025-11-22 08:48:46.358 189273 DEBUG nova.network.neutron [req-281bb9f4-39c5-4dc3-a6c8-3774352b5f0e req-a23134a1-8d86-4e7b-876e-7c976599eb75 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Refreshing network info cache for port 3f5ad619-9cef-49b4-b0fd-8243d3506e32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:48:46 compute-0 nova_compute[189268]: 2025-11-22 08:48:46.359 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:48:46 compute-0 nova_compute[189268]: 2025-11-22 08:48:46.360 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:48:46 compute-0 nova_compute[189268]: 2025-11-22 08:48:46.360 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:48:47 compute-0 ovn_controller[97783]: 2025-11-22T08:48:47Z|00105|binding|INFO|Releasing lport 37fb22bb-e01c-451f-a2d2-26ee384f1620 from this chassis (sb_readonly=0)
Nov 22 08:48:47 compute-0 ovn_controller[97783]: 2025-11-22T08:48:47Z|00106|binding|INFO|Releasing lport 14593604-d14e-4f1d-99d7-97dd69b97e09 from this chassis (sb_readonly=0)
Nov 22 08:48:47 compute-0 nova_compute[189268]: 2025-11-22 08:48:47.325 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:47 compute-0 nova_compute[189268]: 2025-11-22 08:48:47.330 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:48 compute-0 nova_compute[189268]: 2025-11-22 08:48:48.703 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:49 compute-0 nova_compute[189268]: 2025-11-22 08:48:49.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:48:49 compute-0 podman[251535]: 2025-11-22 08:48:49.140011939 +0000 UTC m=+0.090933446 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public, name=ubi9, architecture=x86_64, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, version=9.4, managed_by=edpm_ansible)
Nov 22 08:48:49 compute-0 podman[251536]: 2025-11-22 08:48:49.184589576 +0000 UTC m=+0.128919106 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 08:48:49 compute-0 ovn_controller[97783]: 2025-11-22T08:48:49Z|00107|binding|INFO|Releasing lport 37fb22bb-e01c-451f-a2d2-26ee384f1620 from this chassis (sb_readonly=0)
Nov 22 08:48:49 compute-0 ovn_controller[97783]: 2025-11-22T08:48:49Z|00108|binding|INFO|Releasing lport 14593604-d14e-4f1d-99d7-97dd69b97e09 from this chassis (sb_readonly=0)
Nov 22 08:48:49 compute-0 nova_compute[189268]: 2025-11-22 08:48:49.987 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:50 compute-0 nova_compute[189268]: 2025-11-22 08:48:50.272 189273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763801315.2712524, a04b24d5-3478-4e5f-bb51-abf299fa4459 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:48:50 compute-0 nova_compute[189268]: 2025-11-22 08:48:50.273 189273 INFO nova.compute.manager [-] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] VM Stopped (Lifecycle Event)
Nov 22 08:48:50 compute-0 nova_compute[189268]: 2025-11-22 08:48:50.287 189273 DEBUG nova.network.neutron [req-281bb9f4-39c5-4dc3-a6c8-3774352b5f0e req-a23134a1-8d86-4e7b-876e-7c976599eb75 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Updated VIF entry in instance network info cache for port 3f5ad619-9cef-49b4-b0fd-8243d3506e32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:48:50 compute-0 nova_compute[189268]: 2025-11-22 08:48:50.288 189273 DEBUG nova.network.neutron [req-281bb9f4-39c5-4dc3-a6c8-3774352b5f0e req-a23134a1-8d86-4e7b-876e-7c976599eb75 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Updating instance_info_cache with network_info: [{"id": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "address": "fa:16:3e:7a:63:17", "network": {"id": "3485ad45-c98a-4c02-b9a2-34cc945b16d2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1783802964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f5ad619-9c", "ovs_interfaceid": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:48:50 compute-0 nova_compute[189268]: 2025-11-22 08:48:50.291 189273 DEBUG nova.compute.manager [None req-4848c8e8-52a9-4222-a024-af6dee3bfe73 - - - - - -] [instance: a04b24d5-3478-4e5f-bb51-abf299fa4459] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:48:50 compute-0 nova_compute[189268]: 2025-11-22 08:48:50.300 189273 DEBUG oslo_concurrency.lockutils [req-281bb9f4-39c5-4dc3-a6c8-3774352b5f0e req-a23134a1-8d86-4e7b-876e-7c976599eb75 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-4414e066-bc1a-4a63-b3a0-5e88f0553032" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:48:52 compute-0 nova_compute[189268]: 2025-11-22 08:48:52.328 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:53 compute-0 nova_compute[189268]: 2025-11-22 08:48:53.669 189273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763801318.6667082, 9f91d44e-f61c-44ca-b623-140121eb8965 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:48:53 compute-0 nova_compute[189268]: 2025-11-22 08:48:53.670 189273 INFO nova.compute.manager [-] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] VM Stopped (Lifecycle Event)
Nov 22 08:48:53 compute-0 nova_compute[189268]: 2025-11-22 08:48:53.688 189273 DEBUG nova.compute.manager [None req-63feefd7-61d5-4436-b4f8-fe09ddb5fadf - - - - - -] [instance: 9f91d44e-f61c-44ca-b623-140121eb8965] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:48:53 compute-0 nova_compute[189268]: 2025-11-22 08:48:53.708 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:53 compute-0 nova_compute[189268]: 2025-11-22 08:48:53.775 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:53 compute-0 ovn_controller[97783]: 2025-11-22T08:48:53Z|00109|binding|INFO|Releasing lport 37fb22bb-e01c-451f-a2d2-26ee384f1620 from this chassis (sb_readonly=0)
Nov 22 08:48:53 compute-0 ovn_controller[97783]: 2025-11-22T08:48:53Z|00110|binding|INFO|Releasing lport 14593604-d14e-4f1d-99d7-97dd69b97e09 from this chassis (sb_readonly=0)
Nov 22 08:48:53 compute-0 nova_compute[189268]: 2025-11-22 08:48:53.993 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:54 compute-0 podman[251580]: 2025-11-22 08:48:54.15621317 +0000 UTC m=+0.110190204 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, container_name=openstack_network_exporter, name=ubi9-minimal, vcs-type=git, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_id=edpm, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container)
Nov 22 08:48:56 compute-0 nova_compute[189268]: 2025-11-22 08:48:56.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:48:56 compute-0 nova_compute[189268]: 2025-11-22 08:48:56.131 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:56 compute-0 nova_compute[189268]: 2025-11-22 08:48:56.132 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:56 compute-0 nova_compute[189268]: 2025-11-22 08:48:56.133 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:56 compute-0 nova_compute[189268]: 2025-11-22 08:48:56.133 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:48:56 compute-0 nova_compute[189268]: 2025-11-22 08:48:56.226 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:56 compute-0 nova_compute[189268]: 2025-11-22 08:48:56.308 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:56 compute-0 nova_compute[189268]: 2025-11-22 08:48:56.310 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:56 compute-0 nova_compute[189268]: 2025-11-22 08:48:56.385 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:56 compute-0 nova_compute[189268]: 2025-11-22 08:48:56.397 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:56 compute-0 nova_compute[189268]: 2025-11-22 08:48:56.471 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:56 compute-0 nova_compute[189268]: 2025-11-22 08:48:56.472 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:48:56 compute-0 nova_compute[189268]: 2025-11-22 08:48:56.540 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:48:56 compute-0 nova_compute[189268]: 2025-11-22 08:48:56.926 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:48:56 compute-0 nova_compute[189268]: 2025-11-22 08:48:56.928 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5088MB free_disk=72.45904159545898GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:48:56 compute-0 nova_compute[189268]: 2025-11-22 08:48:56.929 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:48:56 compute-0 nova_compute[189268]: 2025-11-22 08:48:56.929 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:48:57 compute-0 nova_compute[189268]: 2025-11-22 08:48:57.023 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4414e066-bc1a-4a63-b3a0-5e88f0553032 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:48:57 compute-0 nova_compute[189268]: 2025-11-22 08:48:57.024 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 81db0af1-e2c6-4f76-a043-9d51b0431db0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:48:57 compute-0 nova_compute[189268]: 2025-11-22 08:48:57.024 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:48:57 compute-0 nova_compute[189268]: 2025-11-22 08:48:57.025 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:48:57 compute-0 podman[251614]: 2025-11-22 08:48:57.114088404 +0000 UTC m=+0.069121440 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:48:57 compute-0 nova_compute[189268]: 2025-11-22 08:48:57.127 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:48:57 compute-0 nova_compute[189268]: 2025-11-22 08:48:57.157 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:48:57 compute-0 nova_compute[189268]: 2025-11-22 08:48:57.189 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:48:57 compute-0 nova_compute[189268]: 2025-11-22 08:48:57.190 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.261s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:48:57 compute-0 nova_compute[189268]: 2025-11-22 08:48:57.331 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:58 compute-0 nova_compute[189268]: 2025-11-22 08:48:58.441 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:58 compute-0 nova_compute[189268]: 2025-11-22 08:48:58.711 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:48:59 compute-0 podman[203476]: time="2025-11-22T08:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:48:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30754 "" "Go-http-client/1.1"
Nov 22 08:48:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5272 "" "Go-http-client/1.1"
Nov 22 08:49:01 compute-0 openstack_network_exporter[205661]: ERROR   08:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:49:01 compute-0 openstack_network_exporter[205661]: ERROR   08:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:49:01 compute-0 openstack_network_exporter[205661]: ERROR   08:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:49:01 compute-0 openstack_network_exporter[205661]: ERROR   08:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:49:02 compute-0 nova_compute[189268]: 2025-11-22 08:49:02.334 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:03 compute-0 nova_compute[189268]: 2025-11-22 08:49:03.715 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:06 compute-0 podman[251655]: 2025-11-22 08:49:06.12294139 +0000 UTC m=+0.080546797 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:49:06 compute-0 podman[251656]: 2025-11-22 08:49:06.132845016 +0000 UTC m=+0.078102491 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:49:06 compute-0 podman[251657]: 2025-11-22 08:49:06.169402789 +0000 UTC m=+0.105832356 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 22 08:49:07 compute-0 nova_compute[189268]: 2025-11-22 08:49:07.336 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:07 compute-0 ovn_controller[97783]: 2025-11-22T08:49:07Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:c8:ca 10.100.0.9
Nov 22 08:49:07 compute-0 ovn_controller[97783]: 2025-11-22T08:49:07Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:c8:ca 10.100.0.9
Nov 22 08:49:08 compute-0 nova_compute[189268]: 2025-11-22 08:49:08.720 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:09.990 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:49:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:09.991 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:49:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:09.992 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:49:10 compute-0 ovn_controller[97783]: 2025-11-22T08:49:10Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:63:17 10.100.0.14
Nov 22 08:49:10 compute-0 ovn_controller[97783]: 2025-11-22T08:49:10Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:63:17 10.100.0.14
Nov 22 08:49:12 compute-0 nova_compute[189268]: 2025-11-22 08:49:12.339 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:13 compute-0 nova_compute[189268]: 2025-11-22 08:49:13.724 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:14 compute-0 nova_compute[189268]: 2025-11-22 08:49:14.489 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Acquiring lock "94198e9a-a485-4010-9e92-6132c12413f2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:49:14 compute-0 nova_compute[189268]: 2025-11-22 08:49:14.489 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lock "94198e9a-a485-4010-9e92-6132c12413f2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:49:14 compute-0 nova_compute[189268]: 2025-11-22 08:49:14.523 189273 DEBUG nova.compute.manager [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 08:49:14 compute-0 nova_compute[189268]: 2025-11-22 08:49:14.622 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:49:14 compute-0 nova_compute[189268]: 2025-11-22 08:49:14.623 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:49:14 compute-0 nova_compute[189268]: 2025-11-22 08:49:14.634 189273 DEBUG nova.virt.hardware [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 08:49:14 compute-0 nova_compute[189268]: 2025-11-22 08:49:14.635 189273 INFO nova.compute.claims [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Claim successful on node compute-0.ctlplane.example.com
Nov 22 08:49:14 compute-0 nova_compute[189268]: 2025-11-22 08:49:14.842 189273 DEBUG nova.compute.provider_tree [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:49:14 compute-0 nova_compute[189268]: 2025-11-22 08:49:14.860 189273 DEBUG nova.scheduler.client.report [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:49:14 compute-0 nova_compute[189268]: 2025-11-22 08:49:14.895 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:49:14 compute-0 nova_compute[189268]: 2025-11-22 08:49:14.897 189273 DEBUG nova.compute.manager [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 08:49:14 compute-0 nova_compute[189268]: 2025-11-22 08:49:14.949 189273 DEBUG nova.compute.manager [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 08:49:14 compute-0 nova_compute[189268]: 2025-11-22 08:49:14.951 189273 DEBUG nova.network.neutron [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 08:49:14 compute-0 nova_compute[189268]: 2025-11-22 08:49:14.973 189273 INFO nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 08:49:14 compute-0 nova_compute[189268]: 2025-11-22 08:49:14.991 189273 DEBUG nova.compute.manager [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.095 189273 DEBUG nova.compute.manager [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.098 189273 DEBUG nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.098 189273 INFO nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Creating image(s)
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.099 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Acquiring lock "/var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.100 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lock "/var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.101 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lock "/var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.119 189273 DEBUG oslo_concurrency.processutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.189 189273 DEBUG oslo_concurrency.processutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.190 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Acquiring lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.191 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.203 189273 DEBUG oslo_concurrency.processutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.270 189273 DEBUG oslo_concurrency.processutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.271 189273 DEBUG oslo_concurrency.processutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad,backing_fmt=raw /var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.318 189273 DEBUG oslo_concurrency.processutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad,backing_fmt=raw /var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2/disk 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.319 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.320 189273 DEBUG oslo_concurrency.processutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.382 189273 DEBUG oslo_concurrency.processutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.383 189273 DEBUG nova.virt.disk.api [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Checking if we can resize image /var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.385 189273 DEBUG oslo_concurrency.processutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.431 189273 DEBUG nova.policy [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '056ede5a6ff04739bec29b1558f65499', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c47de2cb590748e6a379da2c77fe03df', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.460 189273 DEBUG oslo_concurrency.processutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.461 189273 DEBUG nova.virt.disk.api [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Cannot resize image /var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.462 189273 DEBUG nova.objects.instance [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lazy-loading 'migration_context' on Instance uuid 94198e9a-a485-4010-9e92-6132c12413f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.477 189273 DEBUG nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.478 189273 DEBUG nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Ensure instance console log exists: /var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.479 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.479 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:49:15 compute-0 nova_compute[189268]: 2025-11-22 08:49:15.480 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:49:16 compute-0 podman[251740]: 2025-11-22 08:49:16.121971541 +0000 UTC m=+0.068779960 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:49:16 compute-0 podman[251739]: 2025-11-22 08:49:16.123936434 +0000 UTC m=+0.074800812 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 08:49:17 compute-0 nova_compute[189268]: 2025-11-22 08:49:17.250 189273 DEBUG nova.network.neutron [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Successfully created port: b37205f4-d490-4b94-8deb-1db878ab597a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 08:49:17 compute-0 nova_compute[189268]: 2025-11-22 08:49:17.342 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:18 compute-0 nova_compute[189268]: 2025-11-22 08:49:18.729 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:18 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:18.734 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:cf:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'd6:f7:8f:a1:cd:35'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:49:18 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:18.735 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 08:49:18 compute-0 nova_compute[189268]: 2025-11-22 08:49:18.737 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:18 compute-0 nova_compute[189268]: 2025-11-22 08:49:18.845 189273 DEBUG nova.network.neutron [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Successfully updated port: b37205f4-d490-4b94-8deb-1db878ab597a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 08:49:18 compute-0 nova_compute[189268]: 2025-11-22 08:49:18.904 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Acquiring lock "refresh_cache-94198e9a-a485-4010-9e92-6132c12413f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:49:18 compute-0 nova_compute[189268]: 2025-11-22 08:49:18.905 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Acquired lock "refresh_cache-94198e9a-a485-4010-9e92-6132c12413f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:49:18 compute-0 nova_compute[189268]: 2025-11-22 08:49:18.905 189273 DEBUG nova.network.neutron [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 08:49:19 compute-0 nova_compute[189268]: 2025-11-22 08:49:19.207 189273 DEBUG nova.compute.manager [req-993da497-fe04-4118-9771-ce2e37171a3c req-62e0ea0b-d537-461b-8e4d-82c9a96ffab1 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Received event network-changed-b37205f4-d490-4b94-8deb-1db878ab597a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:49:19 compute-0 nova_compute[189268]: 2025-11-22 08:49:19.207 189273 DEBUG nova.compute.manager [req-993da497-fe04-4118-9771-ce2e37171a3c req-62e0ea0b-d537-461b-8e4d-82c9a96ffab1 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Refreshing instance network info cache due to event network-changed-b37205f4-d490-4b94-8deb-1db878ab597a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:49:19 compute-0 nova_compute[189268]: 2025-11-22 08:49:19.208 189273 DEBUG oslo_concurrency.lockutils [req-993da497-fe04-4118-9771-ce2e37171a3c req-62e0ea0b-d537-461b-8e4d-82c9a96ffab1 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-94198e9a-a485-4010-9e92-6132c12413f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:49:19 compute-0 nova_compute[189268]: 2025-11-22 08:49:19.304 189273 DEBUG nova.network.neutron [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 08:49:20 compute-0 podman[251780]: 2025-11-22 08:49:20.158656208 +0000 UTC m=+0.102702192 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.4, config_id=edpm, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, build-date=2024-09-18T21:23:30)
Nov 22 08:49:20 compute-0 podman[251781]: 2025-11-22 08:49:20.161494424 +0000 UTC m=+0.102706122 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller)
Nov 22 08:49:20 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:20.737 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.325 189273 DEBUG nova.network.neutron [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Updating instance_info_cache with network_info: [{"id": "b37205f4-d490-4b94-8deb-1db878ab597a", "address": "fa:16:3e:54:79:78", "network": {"id": "aa8fe5d7-0d24-412a-ac01-d2a96241587e", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2020107474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c47de2cb590748e6a379da2c77fe03df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb37205f4-d4", "ovs_interfaceid": "b37205f4-d490-4b94-8deb-1db878ab597a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.484 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Releasing lock "refresh_cache-94198e9a-a485-4010-9e92-6132c12413f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.484 189273 DEBUG nova.compute.manager [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Instance network_info: |[{"id": "b37205f4-d490-4b94-8deb-1db878ab597a", "address": "fa:16:3e:54:79:78", "network": {"id": "aa8fe5d7-0d24-412a-ac01-d2a96241587e", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2020107474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c47de2cb590748e6a379da2c77fe03df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb37205f4-d4", "ovs_interfaceid": "b37205f4-d490-4b94-8deb-1db878ab597a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.484 189273 DEBUG oslo_concurrency.lockutils [req-993da497-fe04-4118-9771-ce2e37171a3c req-62e0ea0b-d537-461b-8e4d-82c9a96ffab1 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-94198e9a-a485-4010-9e92-6132c12413f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.485 189273 DEBUG nova.network.neutron [req-993da497-fe04-4118-9771-ce2e37171a3c req-62e0ea0b-d537-461b-8e4d-82c9a96ffab1 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Refreshing network info cache for port b37205f4-d490-4b94-8deb-1db878ab597a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.488 189273 DEBUG nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Start _get_guest_xml network_info=[{"id": "b37205f4-d490-4b94-8deb-1db878ab597a", "address": "fa:16:3e:54:79:78", "network": {"id": "aa8fe5d7-0d24-412a-ac01-d2a96241587e", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2020107474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c47de2cb590748e6a379da2c77fe03df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb37205f4-d4", "ovs_interfaceid": "b37205f4-d490-4b94-8deb-1db878ab597a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T08:46:32Z,direct_url=<?>,disk_format='qcow2',id=ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T08:46:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.494 189273 WARNING nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.502 189273 DEBUG nova.virt.libvirt.host [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.503 189273 DEBUG nova.virt.libvirt.host [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.510 189273 DEBUG nova.virt.libvirt.host [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.510 189273 DEBUG nova.virt.libvirt.host [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.511 189273 DEBUG nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.511 189273 DEBUG nova.virt.hardware [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T08:46:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='60cc47c3-347f-4964-bb52-9bef8d0548a9',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T08:46:32Z,direct_url=<?>,disk_format='qcow2',id=ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T08:46:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.511 189273 DEBUG nova.virt.hardware [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.512 189273 DEBUG nova.virt.hardware [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.512 189273 DEBUG nova.virt.hardware [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.512 189273 DEBUG nova.virt.hardware [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.512 189273 DEBUG nova.virt.hardware [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.512 189273 DEBUG nova.virt.hardware [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.513 189273 DEBUG nova.virt.hardware [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.513 189273 DEBUG nova.virt.hardware [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.513 189273 DEBUG nova.virt.hardware [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.513 189273 DEBUG nova.virt.hardware [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.516 189273 DEBUG nova.virt.libvirt.vif [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T08:49:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-997365885',display_name='tempest-TestServerBasicOps-server-997365885',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-997365885',id=11,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLcYvGNYl23mihoJLTfvE0IiYo3x2gxuZswcDN3C9+US21VTdIP/lsfNQ9GDLAttRATHAuOf6pUBP+qoE3j4vwOTOhZLaw5In/EmWAhgL9G+Ls4Z8R14o3Gu6x4a5/U0tA==',key_name='tempest-TestServerBasicOps-190901822',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c47de2cb590748e6a379da2c77fe03df',ramdisk_id='',reservation_id='r-0twfn3s0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-685598289',owner_user_name='tempest-TestServerBasicOps-685598289-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:49:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='056ede5a6ff04739bec29b1558f65499',uuid=94198e9a-a485-4010-9e92-6132c12413f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b37205f4-d490-4b94-8deb-1db878ab597a", "address": "fa:16:3e:54:79:78", "network": {"id": "aa8fe5d7-0d24-412a-ac01-d2a96241587e", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2020107474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c47de2cb590748e6a379da2c77fe03df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb37205f4-d4", "ovs_interfaceid": "b37205f4-d490-4b94-8deb-1db878ab597a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.517 189273 DEBUG nova.network.os_vif_util [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Converting VIF {"id": "b37205f4-d490-4b94-8deb-1db878ab597a", "address": "fa:16:3e:54:79:78", "network": {"id": "aa8fe5d7-0d24-412a-ac01-d2a96241587e", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2020107474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c47de2cb590748e6a379da2c77fe03df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb37205f4-d4", "ovs_interfaceid": "b37205f4-d490-4b94-8deb-1db878ab597a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.518 189273 DEBUG nova.network.os_vif_util [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:79:78,bridge_name='br-int',has_traffic_filtering=True,id=b37205f4-d490-4b94-8deb-1db878ab597a,network=Network(aa8fe5d7-0d24-412a-ac01-d2a96241587e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb37205f4-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.518 189273 DEBUG nova.objects.instance [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lazy-loading 'pci_devices' on Instance uuid 94198e9a-a485-4010-9e92-6132c12413f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.530 189273 DEBUG nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] End _get_guest_xml xml=<domain type="kvm">
Nov 22 08:49:21 compute-0 nova_compute[189268]:   <uuid>94198e9a-a485-4010-9e92-6132c12413f2</uuid>
Nov 22 08:49:21 compute-0 nova_compute[189268]:   <name>instance-0000000b</name>
Nov 22 08:49:21 compute-0 nova_compute[189268]:   <memory>131072</memory>
Nov 22 08:49:21 compute-0 nova_compute[189268]:   <vcpu>1</vcpu>
Nov 22 08:49:21 compute-0 nova_compute[189268]:   <metadata>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <nova:name>tempest-TestServerBasicOps-server-997365885</nova:name>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <nova:creationTime>2025-11-22 08:49:21</nova:creationTime>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <nova:flavor name="m1.nano">
Nov 22 08:49:21 compute-0 nova_compute[189268]:         <nova:memory>128</nova:memory>
Nov 22 08:49:21 compute-0 nova_compute[189268]:         <nova:disk>1</nova:disk>
Nov 22 08:49:21 compute-0 nova_compute[189268]:         <nova:swap>0</nova:swap>
Nov 22 08:49:21 compute-0 nova_compute[189268]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 08:49:21 compute-0 nova_compute[189268]:         <nova:vcpus>1</nova:vcpus>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       </nova:flavor>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <nova:owner>
Nov 22 08:49:21 compute-0 nova_compute[189268]:         <nova:user uuid="056ede5a6ff04739bec29b1558f65499">tempest-TestServerBasicOps-685598289-project-member</nova:user>
Nov 22 08:49:21 compute-0 nova_compute[189268]:         <nova:project uuid="c47de2cb590748e6a379da2c77fe03df">tempest-TestServerBasicOps-685598289</nova:project>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       </nova:owner>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <nova:root type="image" uuid="ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <nova:ports>
Nov 22 08:49:21 compute-0 nova_compute[189268]:         <nova:port uuid="b37205f4-d490-4b94-8deb-1db878ab597a">
Nov 22 08:49:21 compute-0 nova_compute[189268]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:         </nova:port>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       </nova:ports>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     </nova:instance>
Nov 22 08:49:21 compute-0 nova_compute[189268]:   </metadata>
Nov 22 08:49:21 compute-0 nova_compute[189268]:   <sysinfo type="smbios">
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <system>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <entry name="manufacturer">RDO</entry>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <entry name="product">OpenStack Compute</entry>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <entry name="serial">94198e9a-a485-4010-9e92-6132c12413f2</entry>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <entry name="uuid">94198e9a-a485-4010-9e92-6132c12413f2</entry>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <entry name="family">Virtual Machine</entry>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     </system>
Nov 22 08:49:21 compute-0 nova_compute[189268]:   </sysinfo>
Nov 22 08:49:21 compute-0 nova_compute[189268]:   <os>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <boot dev="hd"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <smbios mode="sysinfo"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:   </os>
Nov 22 08:49:21 compute-0 nova_compute[189268]:   <features>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <acpi/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <apic/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <vmcoreinfo/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:   </features>
Nov 22 08:49:21 compute-0 nova_compute[189268]:   <clock offset="utc">
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <timer name="hpet" present="no"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:   </clock>
Nov 22 08:49:21 compute-0 nova_compute[189268]:   <cpu mode="host-model" match="exact">
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:49:21 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2/disk"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <target dev="vda" bus="virtio"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <disk type="file" device="cdrom">
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <driver name="qemu" type="raw" cache="none"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2/disk.config"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <target dev="sda" bus="sata"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <interface type="ethernet">
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <mac address="fa:16:3e:54:79:78"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <mtu size="1442"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <target dev="tapb37205f4-d4"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     </interface>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <serial type="pty">
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <log file="/var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2/console.log" append="off"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     </serial>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <video>
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     </video>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <input type="tablet" bus="usb"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <rng model="virtio">
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <backend model="random">/dev/urandom</backend>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <controller type="usb" index="0"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     <memballoon model="virtio">
Nov 22 08:49:21 compute-0 nova_compute[189268]:       <stats period="10"/>
Nov 22 08:49:21 compute-0 nova_compute[189268]:     </memballoon>
Nov 22 08:49:21 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:49:21 compute-0 nova_compute[189268]: </domain>
Nov 22 08:49:21 compute-0 nova_compute[189268]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.530 189273 DEBUG nova.compute.manager [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Preparing to wait for external event network-vif-plugged-b37205f4-d490-4b94-8deb-1db878ab597a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.531 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Acquiring lock "94198e9a-a485-4010-9e92-6132c12413f2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.531 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lock "94198e9a-a485-4010-9e92-6132c12413f2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.531 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lock "94198e9a-a485-4010-9e92-6132c12413f2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.532 189273 DEBUG nova.virt.libvirt.vif [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T08:49:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-997365885',display_name='tempest-TestServerBasicOps-server-997365885',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-997365885',id=11,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLcYvGNYl23mihoJLTfvE0IiYo3x2gxuZswcDN3C9+US21VTdIP/lsfNQ9GDLAttRATHAuOf6pUBP+qoE3j4vwOTOhZLaw5In/EmWAhgL9G+Ls4Z8R14o3Gu6x4a5/U0tA==',key_name='tempest-TestServerBasicOps-190901822',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c47de2cb590748e6a379da2c77fe03df',ramdisk_id='',reservation_id='r-0twfn3s0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-685598289',owner_user_name='tempest-TestServerBasicOps-685598289-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:49:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='056ede5a6ff04739bec29b1558f65499',uuid=94198e9a-a485-4010-9e92-6132c12413f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b37205f4-d490-4b94-8deb-1db878ab597a", "address": "fa:16:3e:54:79:78", "network": {"id": "aa8fe5d7-0d24-412a-ac01-d2a96241587e", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2020107474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c47de2cb590748e6a379da2c77fe03df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb37205f4-d4", "ovs_interfaceid": "b37205f4-d490-4b94-8deb-1db878ab597a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.532 189273 DEBUG nova.network.os_vif_util [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Converting VIF {"id": "b37205f4-d490-4b94-8deb-1db878ab597a", "address": "fa:16:3e:54:79:78", "network": {"id": "aa8fe5d7-0d24-412a-ac01-d2a96241587e", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2020107474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c47de2cb590748e6a379da2c77fe03df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb37205f4-d4", "ovs_interfaceid": "b37205f4-d490-4b94-8deb-1db878ab597a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.533 189273 DEBUG nova.network.os_vif_util [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:79:78,bridge_name='br-int',has_traffic_filtering=True,id=b37205f4-d490-4b94-8deb-1db878ab597a,network=Network(aa8fe5d7-0d24-412a-ac01-d2a96241587e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb37205f4-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.533 189273 DEBUG os_vif [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:79:78,bridge_name='br-int',has_traffic_filtering=True,id=b37205f4-d490-4b94-8deb-1db878ab597a,network=Network(aa8fe5d7-0d24-412a-ac01-d2a96241587e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb37205f4-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.534 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.534 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.535 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.538 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.538 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb37205f4-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.539 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb37205f4-d4, col_values=(('external_ids', {'iface-id': 'b37205f4-d490-4b94-8deb-1db878ab597a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:79:78', 'vm-uuid': '94198e9a-a485-4010-9e92-6132c12413f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:49:21 compute-0 NetworkManager[56326]: <info>  [1763801361.5416] manager: (tapb37205f4-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.543 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.549 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.550 189273 INFO os_vif [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:79:78,bridge_name='br-int',has_traffic_filtering=True,id=b37205f4-d490-4b94-8deb-1db878ab597a,network=Network(aa8fe5d7-0d24-412a-ac01-d2a96241587e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb37205f4-d4')
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.673 189273 DEBUG nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.673 189273 DEBUG nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.674 189273 DEBUG nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] No VIF found with MAC fa:16:3e:54:79:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 08:49:21 compute-0 nova_compute[189268]: 2025-11-22 08:49:21.674 189273 INFO nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Using config drive
Nov 22 08:49:22 compute-0 nova_compute[189268]: 2025-11-22 08:49:22.226 189273 INFO nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Creating config drive at /var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2/disk.config
Nov 22 08:49:22 compute-0 nova_compute[189268]: 2025-11-22 08:49:22.233 189273 DEBUG oslo_concurrency.processutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpihytq_05 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:49:22 compute-0 nova_compute[189268]: 2025-11-22 08:49:22.345 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:22 compute-0 nova_compute[189268]: 2025-11-22 08:49:22.362 189273 DEBUG oslo_concurrency.processutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpihytq_05" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:49:22 compute-0 kernel: tapb37205f4-d4: entered promiscuous mode
Nov 22 08:49:22 compute-0 NetworkManager[56326]: <info>  [1763801362.4367] manager: (tapb37205f4-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Nov 22 08:49:22 compute-0 ovn_controller[97783]: 2025-11-22T08:49:22Z|00111|binding|INFO|Claiming lport b37205f4-d490-4b94-8deb-1db878ab597a for this chassis.
Nov 22 08:49:22 compute-0 ovn_controller[97783]: 2025-11-22T08:49:22Z|00112|binding|INFO|b37205f4-d490-4b94-8deb-1db878ab597a: Claiming fa:16:3e:54:79:78 10.100.0.14
Nov 22 08:49:22 compute-0 nova_compute[189268]: 2025-11-22 08:49:22.438 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:22 compute-0 ovn_controller[97783]: 2025-11-22T08:49:22Z|00113|binding|INFO|Setting lport b37205f4-d490-4b94-8deb-1db878ab597a ovn-installed in OVS
Nov 22 08:49:22 compute-0 nova_compute[189268]: 2025-11-22 08:49:22.458 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:22 compute-0 nova_compute[189268]: 2025-11-22 08:49:22.464 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:22 compute-0 systemd-udevd[251838]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.486 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:79:78 10.100.0.14'], port_security=['fa:16:3e:54:79:78 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '94198e9a-a485-4010-9e92-6132c12413f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa8fe5d7-0d24-412a-ac01-d2a96241587e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c47de2cb590748e6a379da2c77fe03df', 'neutron:revision_number': '2', 'neutron:security_group_ids': '385e5112-f14c-413a-95f3-479f92434a93 a40a0964-d73d-40d5-afbf-df9a4cc985f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a0953bc-35ff-4d2d-896b-e32829dcd57c, chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=b37205f4-d490-4b94-8deb-1db878ab597a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.488 106642 INFO neutron.agent.ovn.metadata.agent [-] Port b37205f4-d490-4b94-8deb-1db878ab597a in datapath aa8fe5d7-0d24-412a-ac01-d2a96241587e bound to our chassis
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.489 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa8fe5d7-0d24-412a-ac01-d2a96241587e
Nov 22 08:49:22 compute-0 ovn_controller[97783]: 2025-11-22T08:49:22Z|00114|binding|INFO|Setting lport b37205f4-d490-4b94-8deb-1db878ab597a up in Southbound
Nov 22 08:49:22 compute-0 systemd-machined[155703]: New machine qemu-11-instance-0000000b.
Nov 22 08:49:22 compute-0 NetworkManager[56326]: <info>  [1763801362.5028] device (tapb37205f4-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 08:49:22 compute-0 NetworkManager[56326]: <info>  [1763801362.5038] device (tapb37205f4-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.507 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[cca480b9-ef27-41bb-a179-032447e1211f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.508 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa8fe5d7-01 in ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 08:49:22 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.510 239666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa8fe5d7-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.511 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[15451529-ea55-449c-88e9-82fd47de79de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.512 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[44484b69-9b71-42da-b0a2-85d933105eee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.529 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf1bced-a587-4b8b-bc56-868a256165ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.558 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba86cd1-c44b-445b-ab5b-60282aa3adcb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.588 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[85163c04-2de3-4f98-ba44-3bacd08423c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.595 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[d035d1f1-298f-455f-9157-39de97a67d5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:22 compute-0 NetworkManager[56326]: <info>  [1763801362.5965] manager: (tapaa8fe5d7-00): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.631 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[df021535-dfc6-46f9-94e4-872e182f83ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.634 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[1bfb242c-a4ce-43d9-8406-26513e40cbf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:22 compute-0 NetworkManager[56326]: <info>  [1763801362.6547] device (tapaa8fe5d7-00): carrier: link connected
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.661 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[579db132-84be-4909-8c68-dedcb0a1d89b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.676 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef75e43-4ac3-49e5-9746-9f8b94877188]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa8fe5d7-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:c2:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647624, 'reachable_time': 41317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251872, 'error': None, 'target': 'ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.699 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[6204fa80-fbfc-4036-aaea-e43ea2656ffd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe41:c2e1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647624, 'tstamp': 647624}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251873, 'error': None, 'target': 'ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.715 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[b00a0b36-6cfc-4d8f-ac7c-3ca453bc7712]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa8fe5d7-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:c2:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647624, 'reachable_time': 41317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251874, 'error': None, 'target': 'ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.746 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[3a3fcfef-dca1-4985-8f43-53fe36fd9c4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.813 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[b777360e-b0f8-4090-8bc6-31d799374476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.815 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa8fe5d7-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.815 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.816 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa8fe5d7-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:49:22 compute-0 nova_compute[189268]: 2025-11-22 08:49:22.818 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:22 compute-0 kernel: tapaa8fe5d7-00: entered promiscuous mode
Nov 22 08:49:22 compute-0 NetworkManager[56326]: <info>  [1763801362.8194] manager: (tapaa8fe5d7-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 22 08:49:22 compute-0 nova_compute[189268]: 2025-11-22 08:49:22.820 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.821 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa8fe5d7-00, col_values=(('external_ids', {'iface-id': '90405c2f-de13-48c0-b5df-199144f1c020'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:49:22 compute-0 nova_compute[189268]: 2025-11-22 08:49:22.823 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:22 compute-0 ovn_controller[97783]: 2025-11-22T08:49:22Z|00115|binding|INFO|Releasing lport 90405c2f-de13-48c0-b5df-199144f1c020 from this chassis (sb_readonly=0)
Nov 22 08:49:22 compute-0 nova_compute[189268]: 2025-11-22 08:49:22.826 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.827 106642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa8fe5d7-0d24-412a-ac01-d2a96241587e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa8fe5d7-0d24-412a-ac01-d2a96241587e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.827 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[5dada298-03fd-4365-8602-26010e7fe8f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.828 106642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: global
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     log         /dev/log local0 debug
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     log-tag     haproxy-metadata-proxy-aa8fe5d7-0d24-412a-ac01-d2a96241587e
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     user        root
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     group       root
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     maxconn     1024
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     pidfile     /var/lib/neutron/external/pids/aa8fe5d7-0d24-412a-ac01-d2a96241587e.pid.haproxy
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     daemon
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: defaults
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     log global
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     mode http
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     option httplog
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     option dontlognull
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     option http-server-close
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     option forwardfor
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     retries                 3
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     timeout http-request    30s
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     timeout connect         30s
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     timeout client          32s
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     timeout server          32s
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     timeout http-keep-alive 30s
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: listen listener
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     bind 169.254.169.254:80
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:     http-request add-header X-OVN-Network-ID aa8fe5d7-0d24-412a-ac01-d2a96241587e
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 08:49:22 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:22.829 106642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e', 'env', 'PROCESS_TAG=haproxy-aa8fe5d7-0d24-412a-ac01-d2a96241587e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa8fe5d7-0d24-412a-ac01-d2a96241587e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 08:49:22 compute-0 nova_compute[189268]: 2025-11-22 08:49:22.841 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:22 compute-0 nova_compute[189268]: 2025-11-22 08:49:22.976 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801362.9752824, 94198e9a-a485-4010-9e92-6132c12413f2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:49:22 compute-0 nova_compute[189268]: 2025-11-22 08:49:22.976 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] VM Started (Lifecycle Event)
Nov 22 08:49:22 compute-0 nova_compute[189268]: 2025-11-22 08:49:22.997 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:49:23 compute-0 nova_compute[189268]: 2025-11-22 08:49:23.003 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801362.9755595, 94198e9a-a485-4010-9e92-6132c12413f2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:49:23 compute-0 nova_compute[189268]: 2025-11-22 08:49:23.003 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] VM Paused (Lifecycle Event)
Nov 22 08:49:23 compute-0 nova_compute[189268]: 2025-11-22 08:49:23.023 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:49:23 compute-0 nova_compute[189268]: 2025-11-22 08:49:23.028 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:49:23 compute-0 nova_compute[189268]: 2025-11-22 08:49:23.054 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:49:23 compute-0 podman[251912]: 2025-11-22 08:49:23.23813041 +0000 UTC m=+0.044287482 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 08:49:23 compute-0 podman[251912]: 2025-11-22 08:49:23.47729407 +0000 UTC m=+0.283451142 container create 3a2373900e183b39499d0f57566a896d3fefa7c5be0d8180a27d690f11dd2e90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 08:49:23 compute-0 systemd[1]: Started libpod-conmon-3a2373900e183b39499d0f57566a896d3fefa7c5be0d8180a27d690f11dd2e90.scope.
Nov 22 08:49:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/729692d432811abd74cff7983b0b60f975182c70b888bbf463952acb70267d89/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 08:49:23 compute-0 podman[251912]: 2025-11-22 08:49:23.644011703 +0000 UTC m=+0.450168785 container init 3a2373900e183b39499d0f57566a896d3fefa7c5be0d8180a27d690f11dd2e90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:49:23 compute-0 podman[251912]: 2025-11-22 08:49:23.65136721 +0000 UTC m=+0.457524262 container start 3a2373900e183b39499d0f57566a896d3fefa7c5be0d8180a27d690f11dd2e90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:49:23 compute-0 neutron-haproxy-ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e[251927]: [NOTICE]   (251931) : New worker (251933) forked
Nov 22 08:49:23 compute-0 neutron-haproxy-ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e[251927]: [NOTICE]   (251931) : Loading success.
Nov 22 08:49:23 compute-0 nova_compute[189268]: 2025-11-22 08:49:23.993 189273 DEBUG nova.network.neutron [req-993da497-fe04-4118-9771-ce2e37171a3c req-62e0ea0b-d537-461b-8e4d-82c9a96ffab1 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Updated VIF entry in instance network info cache for port b37205f4-d490-4b94-8deb-1db878ab597a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:49:23 compute-0 nova_compute[189268]: 2025-11-22 08:49:23.994 189273 DEBUG nova.network.neutron [req-993da497-fe04-4118-9771-ce2e37171a3c req-62e0ea0b-d537-461b-8e4d-82c9a96ffab1 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Updating instance_info_cache with network_info: [{"id": "b37205f4-d490-4b94-8deb-1db878ab597a", "address": "fa:16:3e:54:79:78", "network": {"id": "aa8fe5d7-0d24-412a-ac01-d2a96241587e", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2020107474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c47de2cb590748e6a379da2c77fe03df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb37205f4-d4", "ovs_interfaceid": "b37205f4-d490-4b94-8deb-1db878ab597a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.007 189273 DEBUG oslo_concurrency.lockutils [req-993da497-fe04-4118-9771-ce2e37171a3c req-62e0ea0b-d537-461b-8e4d-82c9a96ffab1 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-94198e9a-a485-4010-9e92-6132c12413f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.260 189273 DEBUG nova.compute.manager [req-ed5a7145-a7a4-40f6-a49a-b1db6eff9d6b req-dbc50fe4-be74-48b3-aa2f-9ed51eb6a94f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Received event network-vif-plugged-b37205f4-d490-4b94-8deb-1db878ab597a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.261 189273 DEBUG oslo_concurrency.lockutils [req-ed5a7145-a7a4-40f6-a49a-b1db6eff9d6b req-dbc50fe4-be74-48b3-aa2f-9ed51eb6a94f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "94198e9a-a485-4010-9e92-6132c12413f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.261 189273 DEBUG oslo_concurrency.lockutils [req-ed5a7145-a7a4-40f6-a49a-b1db6eff9d6b req-dbc50fe4-be74-48b3-aa2f-9ed51eb6a94f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "94198e9a-a485-4010-9e92-6132c12413f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.261 189273 DEBUG oslo_concurrency.lockutils [req-ed5a7145-a7a4-40f6-a49a-b1db6eff9d6b req-dbc50fe4-be74-48b3-aa2f-9ed51eb6a94f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "94198e9a-a485-4010-9e92-6132c12413f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.262 189273 DEBUG nova.compute.manager [req-ed5a7145-a7a4-40f6-a49a-b1db6eff9d6b req-dbc50fe4-be74-48b3-aa2f-9ed51eb6a94f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Processing event network-vif-plugged-b37205f4-d490-4b94-8deb-1db878ab597a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.262 189273 DEBUG nova.compute.manager [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.268 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801364.267989, 94198e9a-a485-4010-9e92-6132c12413f2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.268 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] VM Resumed (Lifecycle Event)
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.271 189273 DEBUG nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.277 189273 INFO nova.virt.libvirt.driver [-] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Instance spawned successfully.
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.277 189273 DEBUG nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.296 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.303 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.346 189273 DEBUG nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.347 189273 DEBUG nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.347 189273 DEBUG nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.348 189273 DEBUG nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.348 189273 DEBUG nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.348 189273 DEBUG nova.virt.libvirt.driver [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.427 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.571 189273 INFO nova.compute.manager [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Took 9.47 seconds to spawn the instance on the hypervisor.
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.572 189273 DEBUG nova.compute.manager [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.685 189273 INFO nova.compute.manager [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Took 10.10 seconds to build instance.
Nov 22 08:49:24 compute-0 nova_compute[189268]: 2025-11-22 08:49:24.708 189273 DEBUG oslo_concurrency.lockutils [None req-6df7024a-24af-4efd-9036-84982ec1768c 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lock "94198e9a-a485-4010-9e92-6132c12413f2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:49:25 compute-0 podman[251942]: 2025-11-22 08:49:25.120658292 +0000 UTC m=+0.078083349 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, io.openshift.expose-services=)
Nov 22 08:49:26 compute-0 nova_compute[189268]: 2025-11-22 08:49:26.392 189273 DEBUG nova.compute.manager [req-4002c9b7-86cd-4115-8bf0-74e6eb7a68ec req-2040af46-6ad8-4bdb-9653-54f7f7f15caa 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Received event network-vif-plugged-b37205f4-d490-4b94-8deb-1db878ab597a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:49:26 compute-0 nova_compute[189268]: 2025-11-22 08:49:26.393 189273 DEBUG oslo_concurrency.lockutils [req-4002c9b7-86cd-4115-8bf0-74e6eb7a68ec req-2040af46-6ad8-4bdb-9653-54f7f7f15caa 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "94198e9a-a485-4010-9e92-6132c12413f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:49:26 compute-0 nova_compute[189268]: 2025-11-22 08:49:26.393 189273 DEBUG oslo_concurrency.lockutils [req-4002c9b7-86cd-4115-8bf0-74e6eb7a68ec req-2040af46-6ad8-4bdb-9653-54f7f7f15caa 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "94198e9a-a485-4010-9e92-6132c12413f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:49:26 compute-0 nova_compute[189268]: 2025-11-22 08:49:26.393 189273 DEBUG oslo_concurrency.lockutils [req-4002c9b7-86cd-4115-8bf0-74e6eb7a68ec req-2040af46-6ad8-4bdb-9653-54f7f7f15caa 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "94198e9a-a485-4010-9e92-6132c12413f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:49:26 compute-0 nova_compute[189268]: 2025-11-22 08:49:26.393 189273 DEBUG nova.compute.manager [req-4002c9b7-86cd-4115-8bf0-74e6eb7a68ec req-2040af46-6ad8-4bdb-9653-54f7f7f15caa 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] No waiting events found dispatching network-vif-plugged-b37205f4-d490-4b94-8deb-1db878ab597a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:49:26 compute-0 nova_compute[189268]: 2025-11-22 08:49:26.394 189273 WARNING nova.compute.manager [req-4002c9b7-86cd-4115-8bf0-74e6eb7a68ec req-2040af46-6ad8-4bdb-9653-54f7f7f15caa 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Received unexpected event network-vif-plugged-b37205f4-d490-4b94-8deb-1db878ab597a for instance with vm_state active and task_state None.
Nov 22 08:49:26 compute-0 nova_compute[189268]: 2025-11-22 08:49:26.543 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:27 compute-0 nova_compute[189268]: 2025-11-22 08:49:27.347 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:28 compute-0 podman[251962]: 2025-11-22 08:49:28.102800668 +0000 UTC m=+0.057945898 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:49:28 compute-0 nova_compute[189268]: 2025-11-22 08:49:28.711 189273 DEBUG nova.compute.manager [req-58fe8cfc-d0ca-43e3-903d-e964e5889389 req-05b05dbc-598c-42b8-b3dc-1309d1c6b2c6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Received event network-changed-b37205f4-d490-4b94-8deb-1db878ab597a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:49:28 compute-0 nova_compute[189268]: 2025-11-22 08:49:28.711 189273 DEBUG nova.compute.manager [req-58fe8cfc-d0ca-43e3-903d-e964e5889389 req-05b05dbc-598c-42b8-b3dc-1309d1c6b2c6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Refreshing instance network info cache due to event network-changed-b37205f4-d490-4b94-8deb-1db878ab597a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:49:28 compute-0 nova_compute[189268]: 2025-11-22 08:49:28.711 189273 DEBUG oslo_concurrency.lockutils [req-58fe8cfc-d0ca-43e3-903d-e964e5889389 req-05b05dbc-598c-42b8-b3dc-1309d1c6b2c6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-94198e9a-a485-4010-9e92-6132c12413f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:49:28 compute-0 nova_compute[189268]: 2025-11-22 08:49:28.711 189273 DEBUG oslo_concurrency.lockutils [req-58fe8cfc-d0ca-43e3-903d-e964e5889389 req-05b05dbc-598c-42b8-b3dc-1309d1c6b2c6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-94198e9a-a485-4010-9e92-6132c12413f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:49:28 compute-0 nova_compute[189268]: 2025-11-22 08:49:28.712 189273 DEBUG nova.network.neutron [req-58fe8cfc-d0ca-43e3-903d-e964e5889389 req-05b05dbc-598c-42b8-b3dc-1309d1c6b2c6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Refreshing network info cache for port b37205f4-d490-4b94-8deb-1db878ab597a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:49:29 compute-0 podman[203476]: time="2025-11-22T08:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:49:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31987 "" "Go-http-client/1.1"
Nov 22 08:49:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5732 "" "Go-http-client/1.1"
Nov 22 08:49:30 compute-0 nova_compute[189268]: 2025-11-22 08:49:30.569 189273 DEBUG nova.network.neutron [req-58fe8cfc-d0ca-43e3-903d-e964e5889389 req-05b05dbc-598c-42b8-b3dc-1309d1c6b2c6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Updated VIF entry in instance network info cache for port b37205f4-d490-4b94-8deb-1db878ab597a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:49:30 compute-0 nova_compute[189268]: 2025-11-22 08:49:30.570 189273 DEBUG nova.network.neutron [req-58fe8cfc-d0ca-43e3-903d-e964e5889389 req-05b05dbc-598c-42b8-b3dc-1309d1c6b2c6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Updating instance_info_cache with network_info: [{"id": "b37205f4-d490-4b94-8deb-1db878ab597a", "address": "fa:16:3e:54:79:78", "network": {"id": "aa8fe5d7-0d24-412a-ac01-d2a96241587e", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2020107474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c47de2cb590748e6a379da2c77fe03df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb37205f4-d4", "ovs_interfaceid": "b37205f4-d490-4b94-8deb-1db878ab597a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:49:30 compute-0 nova_compute[189268]: 2025-11-22 08:49:30.586 189273 DEBUG oslo_concurrency.lockutils [req-58fe8cfc-d0ca-43e3-903d-e964e5889389 req-05b05dbc-598c-42b8-b3dc-1309d1c6b2c6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-94198e9a-a485-4010-9e92-6132c12413f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:49:31 compute-0 openstack_network_exporter[205661]: ERROR   08:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:49:31 compute-0 openstack_network_exporter[205661]: ERROR   08:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:49:31 compute-0 openstack_network_exporter[205661]: ERROR   08:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:49:31 compute-0 openstack_network_exporter[205661]: ERROR   08:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:49:31 compute-0 openstack_network_exporter[205661]: ERROR   08:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:49:31 compute-0 nova_compute[189268]: 2025-11-22 08:49:31.544 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:32 compute-0 nova_compute[189268]: 2025-11-22 08:49:32.350 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:36 compute-0 nova_compute[189268]: 2025-11-22 08:49:36.548 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:37 compute-0 podman[251986]: 2025-11-22 08:49:37.153524299 +0000 UTC m=+0.103176064 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 08:49:37 compute-0 podman[251987]: 2025-11-22 08:49:37.161916435 +0000 UTC m=+0.104712266 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 08:49:37 compute-0 podman[251988]: 2025-11-22 08:49:37.165278716 +0000 UTC m=+0.100600846 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:49:37 compute-0 nova_compute[189268]: 2025-11-22 08:49:37.354 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:38 compute-0 nova_compute[189268]: 2025-11-22 08:49:38.189 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:49:40 compute-0 nova_compute[189268]: 2025-11-22 08:49:40.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:49:40 compute-0 nova_compute[189268]: 2025-11-22 08:49:40.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:49:40 compute-0 nova_compute[189268]: 2025-11-22 08:49:40.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:49:41 compute-0 nova_compute[189268]: 2025-11-22 08:49:41.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:49:41 compute-0 nova_compute[189268]: 2025-11-22 08:49:41.101 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:49:41 compute-0 nova_compute[189268]: 2025-11-22 08:49:41.433 189273 DEBUG nova.objects.instance [None req-83b3a93a-edce-448b-be30-f1845209ecfb d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lazy-loading 'flavor' on Instance uuid 81db0af1-e2c6-4f76-a043-9d51b0431db0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:49:41 compute-0 nova_compute[189268]: 2025-11-22 08:49:41.443 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:49:41 compute-0 nova_compute[189268]: 2025-11-22 08:49:41.443 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:49:41 compute-0 nova_compute[189268]: 2025-11-22 08:49:41.444 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:49:41 compute-0 nova_compute[189268]: 2025-11-22 08:49:41.477 189273 DEBUG oslo_concurrency.lockutils [None req-83b3a93a-edce-448b-be30-f1845209ecfb d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Acquiring lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:49:41 compute-0 nova_compute[189268]: 2025-11-22 08:49:41.551 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:42 compute-0 nova_compute[189268]: 2025-11-22 08:49:42.354 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:43 compute-0 nova_compute[189268]: 2025-11-22 08:49:43.912 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Updating instance_info_cache with network_info: [{"id": "5646e04c-958a-4629-b420-730d4967f183", "address": "fa:16:3e:45:c8:ca", "network": {"id": "40cb6b69-21d1-494d-9388-79ae29386703", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1184475015-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3503f7b171c4187acaf1d66e260df45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5646e04c-95", "ovs_interfaceid": "5646e04c-958a-4629-b420-730d4967f183", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:49:43 compute-0 nova_compute[189268]: 2025-11-22 08:49:43.927 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:49:43 compute-0 nova_compute[189268]: 2025-11-22 08:49:43.927 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 08:49:43 compute-0 nova_compute[189268]: 2025-11-22 08:49:43.928 189273 DEBUG oslo_concurrency.lockutils [None req-83b3a93a-edce-448b-be30-f1845209ecfb d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Acquired lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:49:44 compute-0 nova_compute[189268]: 2025-11-22 08:49:44.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:49:45 compute-0 nova_compute[189268]: 2025-11-22 08:49:45.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:49:45 compute-0 nova_compute[189268]: 2025-11-22 08:49:45.568 189273 DEBUG oslo_concurrency.lockutils [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Acquiring lock "4414e066-bc1a-4a63-b3a0-5e88f0553032" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:49:45 compute-0 nova_compute[189268]: 2025-11-22 08:49:45.568 189273 DEBUG oslo_concurrency.lockutils [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:49:45 compute-0 nova_compute[189268]: 2025-11-22 08:49:45.569 189273 INFO nova.compute.manager [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Rebooting instance
Nov 22 08:49:45 compute-0 nova_compute[189268]: 2025-11-22 08:49:45.581 189273 DEBUG oslo_concurrency.lockutils [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Acquiring lock "refresh_cache-4414e066-bc1a-4a63-b3a0-5e88f0553032" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:49:45 compute-0 nova_compute[189268]: 2025-11-22 08:49:45.583 189273 DEBUG oslo_concurrency.lockutils [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Acquired lock "refresh_cache-4414e066-bc1a-4a63-b3a0-5e88f0553032" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:49:45 compute-0 nova_compute[189268]: 2025-11-22 08:49:45.583 189273 DEBUG nova.network.neutron [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 08:49:46 compute-0 nova_compute[189268]: 2025-11-22 08:49:46.054 189273 DEBUG nova.network.neutron [None req-83b3a93a-edce-448b-be30-f1845209ecfb d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 08:49:46 compute-0 nova_compute[189268]: 2025-11-22 08:49:46.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:49:46 compute-0 nova_compute[189268]: 2025-11-22 08:49:46.554 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:46 compute-0 nova_compute[189268]: 2025-11-22 08:49:46.804 189273 DEBUG nova.compute.manager [req-ed2ea9cf-337a-48b0-a3ba-a2c3069c6306 req-f6529ee1-8f12-499d-88b9-b61ae52ce5ed 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Received event network-changed-5646e04c-958a-4629-b420-730d4967f183 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:49:46 compute-0 nova_compute[189268]: 2025-11-22 08:49:46.804 189273 DEBUG nova.compute.manager [req-ed2ea9cf-337a-48b0-a3ba-a2c3069c6306 req-f6529ee1-8f12-499d-88b9-b61ae52ce5ed 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Refreshing instance network info cache due to event network-changed-5646e04c-958a-4629-b420-730d4967f183. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:49:46 compute-0 nova_compute[189268]: 2025-11-22 08:49:46.805 189273 DEBUG oslo_concurrency.lockutils [req-ed2ea9cf-337a-48b0-a3ba-a2c3069c6306 req-f6529ee1-8f12-499d-88b9-b61ae52ce5ed 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:49:47 compute-0 podman[252056]: 2025-11-22 08:49:47.119593651 +0000 UTC m=+0.072994303 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251118, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 08:49:47 compute-0 podman[252057]: 2025-11-22 08:49:47.128928852 +0000 UTC m=+0.073795155 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm)
Nov 22 08:49:47 compute-0 nova_compute[189268]: 2025-11-22 08:49:47.357 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:48 compute-0 nova_compute[189268]: 2025-11-22 08:49:48.565 189273 DEBUG nova.network.neutron [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Updating instance_info_cache with network_info: [{"id": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "address": "fa:16:3e:7a:63:17", "network": {"id": "3485ad45-c98a-4c02-b9a2-34cc945b16d2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1783802964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f5ad619-9c", "ovs_interfaceid": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:49:48 compute-0 nova_compute[189268]: 2025-11-22 08:49:48.583 189273 DEBUG oslo_concurrency.lockutils [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Releasing lock "refresh_cache-4414e066-bc1a-4a63-b3a0-5e88f0553032" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:49:48 compute-0 nova_compute[189268]: 2025-11-22 08:49:48.585 189273 DEBUG nova.compute.manager [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:49:48 compute-0 nova_compute[189268]: 2025-11-22 08:49:48.813 189273 DEBUG nova.network.neutron [None req-83b3a93a-edce-448b-be30-f1845209ecfb d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Updating instance_info_cache with network_info: [{"id": "5646e04c-958a-4629-b420-730d4967f183", "address": "fa:16:3e:45:c8:ca", "network": {"id": "40cb6b69-21d1-494d-9388-79ae29386703", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1184475015-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3503f7b171c4187acaf1d66e260df45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5646e04c-95", "ovs_interfaceid": "5646e04c-958a-4629-b420-730d4967f183", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:49:48 compute-0 kernel: tap3f5ad619-9c (unregistering): left promiscuous mode
Nov 22 08:49:48 compute-0 NetworkManager[56326]: <info>  [1763801388.8453] device (tap3f5ad619-9c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 08:49:48 compute-0 ovn_controller[97783]: 2025-11-22T08:49:48Z|00116|binding|INFO|Releasing lport 3f5ad619-9cef-49b4-b0fd-8243d3506e32 from this chassis (sb_readonly=0)
Nov 22 08:49:48 compute-0 ovn_controller[97783]: 2025-11-22T08:49:48Z|00117|binding|INFO|Setting lport 3f5ad619-9cef-49b4-b0fd-8243d3506e32 down in Southbound
Nov 22 08:49:48 compute-0 ovn_controller[97783]: 2025-11-22T08:49:48Z|00118|binding|INFO|Removing iface tap3f5ad619-9c ovn-installed in OVS
Nov 22 08:49:48 compute-0 nova_compute[189268]: 2025-11-22 08:49:48.856 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:48 compute-0 nova_compute[189268]: 2025-11-22 08:49:48.864 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:48 compute-0 nova_compute[189268]: 2025-11-22 08:49:48.868 189273 DEBUG oslo_concurrency.lockutils [None req-83b3a93a-edce-448b-be30-f1845209ecfb d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Releasing lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:49:48 compute-0 nova_compute[189268]: 2025-11-22 08:49:48.869 189273 DEBUG nova.compute.manager [None req-83b3a93a-edce-448b-be30-f1845209ecfb d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Nov 22 08:49:48 compute-0 nova_compute[189268]: 2025-11-22 08:49:48.869 189273 DEBUG nova.compute.manager [None req-83b3a93a-edce-448b-be30-f1845209ecfb d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] network_info to inject: |[{"id": "5646e04c-958a-4629-b420-730d4967f183", "address": "fa:16:3e:45:c8:ca", "network": {"id": "40cb6b69-21d1-494d-9388-79ae29386703", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1184475015-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3503f7b171c4187acaf1d66e260df45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5646e04c-95", "ovs_interfaceid": "5646e04c-958a-4629-b420-730d4967f183", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Nov 22 08:49:48 compute-0 nova_compute[189268]: 2025-11-22 08:49:48.874 189273 DEBUG oslo_concurrency.lockutils [req-ed2ea9cf-337a-48b0-a3ba-a2c3069c6306 req-f6529ee1-8f12-499d-88b9-b61ae52ce5ed 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:49:48 compute-0 nova_compute[189268]: 2025-11-22 08:49:48.874 189273 DEBUG nova.network.neutron [req-ed2ea9cf-337a-48b0-a3ba-a2c3069c6306 req-f6529ee1-8f12-499d-88b9-b61ae52ce5ed 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Refreshing network info cache for port 5646e04c-958a-4629-b420-730d4967f183 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:49:48 compute-0 nova_compute[189268]: 2025-11-22 08:49:48.892 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:48 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:48.909 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:63:17 10.100.0.14'], port_security=['fa:16:3e:7a:63:17 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4414e066-bc1a-4a63-b3a0-5e88f0553032', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3485ad45-c98a-4c02-b9a2-34cc945b16d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8de05c82cd5c4f7bbe156c45495011c2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4307701f-74fd-4973-8f0e-4204e8ea3fdd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5195068-1343-424b-8d74-4082a6f38e4c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=3f5ad619-9cef-49b4-b0fd-8243d3506e32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:49:48 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:48.910 106642 INFO neutron.agent.ovn.metadata.agent [-] Port 3f5ad619-9cef-49b4-b0fd-8243d3506e32 in datapath 3485ad45-c98a-4c02-b9a2-34cc945b16d2 unbound from our chassis
Nov 22 08:49:48 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:48.912 106642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3485ad45-c98a-4c02-b9a2-34cc945b16d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 08:49:48 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:48.915 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5b68a8-5e55-4de5-872a-69bb022b6d79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:48 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:48.916 106642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2 namespace which is not needed anymore
Nov 22 08:49:48 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 22 08:49:48 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000008.scope: Consumed 43.452s CPU time.
Nov 22 08:49:48 compute-0 systemd-machined[155703]: Machine qemu-9-instance-00000008 terminated.
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.007 189273 INFO nova.virt.libvirt.driver [-] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Instance destroyed successfully.
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.009 189273 DEBUG nova.objects.instance [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lazy-loading 'resources' on Instance uuid 4414e066-bc1a-4a63-b3a0-5e88f0553032 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.024 189273 DEBUG nova.virt.libvirt.vif [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T08:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1615837079',display_name='tempest-ServerActionsTestJSON-server-1615837079',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1615837079',id=8,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLdsHFflrgi7wGkvgkOXdCwC+kr9nW2mi1DXZmxLox1ZC0TuSJdcF2M8rMeuABQiSpoDl4gw87gDh3KsMHxzPzzF3d1/1OBKsUUK2YCN1YD+nS62FFKtRtMD4Bx9Y/yudw==',key_name='tempest-keypair-416169958',keypairs=<?>,launch_index=0,launched_at=2025-11-22T08:48:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8de05c82cd5c4f7bbe156c45495011c2',ramdisk_id='',reservation_id='r-b52qwrco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-748326472',owner_user_name='tempest-ServerActionsTestJSON-748326472-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T08:49:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='16843c91d66144f880a31734be4d3dee',uuid=4414e066-bc1a-4a63-b3a0-5e88f0553032,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "address": "fa:16:3e:7a:63:17", "network": {"id": "3485ad45-c98a-4c02-b9a2-34cc945b16d2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1783802964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f5ad619-9c", "ovs_interfaceid": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.025 189273 DEBUG nova.network.os_vif_util [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Converting VIF {"id": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "address": "fa:16:3e:7a:63:17", "network": {"id": "3485ad45-c98a-4c02-b9a2-34cc945b16d2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1783802964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f5ad619-9c", "ovs_interfaceid": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.026 189273 DEBUG nova.network.os_vif_util [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:63:17,bridge_name='br-int',has_traffic_filtering=True,id=3f5ad619-9cef-49b4-b0fd-8243d3506e32,network=Network(3485ad45-c98a-4c02-b9a2-34cc945b16d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f5ad619-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.027 189273 DEBUG os_vif [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:63:17,bridge_name='br-int',has_traffic_filtering=True,id=3f5ad619-9cef-49b4-b0fd-8243d3506e32,network=Network(3485ad45-c98a-4c02-b9a2-34cc945b16d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f5ad619-9c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.029 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.029 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f5ad619-9c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.031 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.034 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.037 189273 INFO os_vif [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:63:17,bridge_name='br-int',has_traffic_filtering=True,id=3f5ad619-9cef-49b4-b0fd-8243d3506e32,network=Network(3485ad45-c98a-4c02-b9a2-34cc945b16d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f5ad619-9c')
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.047 189273 DEBUG nova.virt.libvirt.driver [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Start _get_guest_xml network_info=[{"id": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "address": "fa:16:3e:7a:63:17", "network": {"id": "3485ad45-c98a-4c02-b9a2-34cc945b16d2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1783802964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f5ad619-9c", "ovs_interfaceid": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.055 189273 WARNING nova.virt.libvirt.driver [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.064 189273 DEBUG nova.virt.libvirt.host [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.065 189273 DEBUG nova.virt.libvirt.host [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.071 189273 DEBUG nova.virt.libvirt.host [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.072 189273 DEBUG nova.virt.libvirt.host [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.073 189273 DEBUG nova.virt.libvirt.driver [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.074 189273 DEBUG nova.virt.hardware [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T08:46:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='60cc47c3-347f-4964-bb52-9bef8d0548a9',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.075 189273 DEBUG nova.virt.hardware [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.076 189273 DEBUG nova.virt.hardware [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.077 189273 DEBUG nova.virt.hardware [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.077 189273 DEBUG nova.virt.hardware [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.078 189273 DEBUG nova.virt.hardware [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.078 189273 DEBUG nova.virt.hardware [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.079 189273 DEBUG nova.virt.hardware [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.079 189273 DEBUG nova.virt.hardware [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.080 189273 DEBUG nova.virt.hardware [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.080 189273 DEBUG nova.virt.hardware [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.081 189273 DEBUG nova.objects.instance [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4414e066-bc1a-4a63-b3a0-5e88f0553032 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.102 189273 DEBUG oslo_concurrency.processutils [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.181 189273 DEBUG oslo_concurrency.processutils [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.config --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.183 189273 DEBUG oslo_concurrency.lockutils [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Acquiring lock "/var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.183 189273 DEBUG oslo_concurrency.lockutils [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "/var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.184 189273 DEBUG oslo_concurrency.lockutils [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "/var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.185 189273 DEBUG nova.virt.libvirt.vif [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T08:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1615837079',display_name='tempest-ServerActionsTestJSON-server-1615837079',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1615837079',id=8,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLdsHFflrgi7wGkvgkOXdCwC+kr9nW2mi1DXZmxLox1ZC0TuSJdcF2M8rMeuABQiSpoDl4gw87gDh3KsMHxzPzzF3d1/1OBKsUUK2YCN1YD+nS62FFKtRtMD4Bx9Y/yudw==',key_name='tempest-keypair-416169958',keypairs=<?>,launch_index=0,launched_at=2025-11-22T08:48:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8de05c82cd5c4f7bbe156c45495011c2',ramdisk_id='',reservation_id='r-b52qwrco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-748326472',owner_user_name='tempest-ServerActionsTestJSON-748326472-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T08:49:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='16843c91d66144f880a31734be4d3dee',uuid=4414e066-bc1a-4a63-b3a0-5e88f0553032,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "address": "fa:16:3e:7a:63:17", "network": {"id": "3485ad45-c98a-4c02-b9a2-34cc945b16d2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1783802964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f5ad619-9c", "ovs_interfaceid": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.186 189273 DEBUG nova.network.os_vif_util [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Converting VIF {"id": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "address": "fa:16:3e:7a:63:17", "network": {"id": "3485ad45-c98a-4c02-b9a2-34cc945b16d2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1783802964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f5ad619-9c", "ovs_interfaceid": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.188 189273 DEBUG nova.network.os_vif_util [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:63:17,bridge_name='br-int',has_traffic_filtering=True,id=3f5ad619-9cef-49b4-b0fd-8243d3506e32,network=Network(3485ad45-c98a-4c02-b9a2-34cc945b16d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f5ad619-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.189 189273 DEBUG nova.objects.instance [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4414e066-bc1a-4a63-b3a0-5e88f0553032 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:49:49 compute-0 neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2[251160]: [NOTICE]   (251177) : haproxy version is 2.8.14-c23fe91
Nov 22 08:49:49 compute-0 neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2[251160]: [NOTICE]   (251177) : path to executable is /usr/sbin/haproxy
Nov 22 08:49:49 compute-0 neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2[251160]: [WARNING]  (251177) : Exiting Master process...
Nov 22 08:49:49 compute-0 neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2[251160]: [ALERT]    (251177) : Current worker (251179) exited with code 143 (Terminated)
Nov 22 08:49:49 compute-0 neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2[251160]: [WARNING]  (251177) : All workers exited. Exiting... (0)
Nov 22 08:49:49 compute-0 systemd[1]: libpod-4b8ce9d9a76ff91ec88923e9e0dee755bce11c23215e5b5b5bee0381cbddf28e.scope: Deactivated successfully.
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.213 189273 DEBUG nova.virt.libvirt.driver [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] End _get_guest_xml xml=<domain type="kvm">
Nov 22 08:49:49 compute-0 nova_compute[189268]:   <uuid>4414e066-bc1a-4a63-b3a0-5e88f0553032</uuid>
Nov 22 08:49:49 compute-0 nova_compute[189268]:   <name>instance-00000008</name>
Nov 22 08:49:49 compute-0 nova_compute[189268]:   <memory>131072</memory>
Nov 22 08:49:49 compute-0 nova_compute[189268]:   <vcpu>1</vcpu>
Nov 22 08:49:49 compute-0 nova_compute[189268]:   <metadata>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <nova:name>tempest-ServerActionsTestJSON-server-1615837079</nova:name>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <nova:creationTime>2025-11-22 08:49:49</nova:creationTime>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <nova:flavor name="m1.nano">
Nov 22 08:49:49 compute-0 nova_compute[189268]:         <nova:memory>128</nova:memory>
Nov 22 08:49:49 compute-0 nova_compute[189268]:         <nova:disk>1</nova:disk>
Nov 22 08:49:49 compute-0 nova_compute[189268]:         <nova:swap>0</nova:swap>
Nov 22 08:49:49 compute-0 nova_compute[189268]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 08:49:49 compute-0 nova_compute[189268]:         <nova:vcpus>1</nova:vcpus>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       </nova:flavor>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <nova:owner>
Nov 22 08:49:49 compute-0 nova_compute[189268]:         <nova:user uuid="16843c91d66144f880a31734be4d3dee">tempest-ServerActionsTestJSON-748326472-project-member</nova:user>
Nov 22 08:49:49 compute-0 nova_compute[189268]:         <nova:project uuid="8de05c82cd5c4f7bbe156c45495011c2">tempest-ServerActionsTestJSON-748326472</nova:project>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       </nova:owner>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <nova:root type="image" uuid="ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <nova:ports>
Nov 22 08:49:49 compute-0 nova_compute[189268]:         <nova:port uuid="3f5ad619-9cef-49b4-b0fd-8243d3506e32">
Nov 22 08:49:49 compute-0 nova_compute[189268]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:         </nova:port>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       </nova:ports>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     </nova:instance>
Nov 22 08:49:49 compute-0 nova_compute[189268]:   </metadata>
Nov 22 08:49:49 compute-0 nova_compute[189268]:   <sysinfo type="smbios">
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <system>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <entry name="manufacturer">RDO</entry>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <entry name="product">OpenStack Compute</entry>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <entry name="serial">4414e066-bc1a-4a63-b3a0-5e88f0553032</entry>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <entry name="uuid">4414e066-bc1a-4a63-b3a0-5e88f0553032</entry>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <entry name="family">Virtual Machine</entry>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     </system>
Nov 22 08:49:49 compute-0 nova_compute[189268]:   </sysinfo>
Nov 22 08:49:49 compute-0 nova_compute[189268]:   <os>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <boot dev="hd"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <smbios mode="sysinfo"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:   </os>
Nov 22 08:49:49 compute-0 nova_compute[189268]:   <features>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <acpi/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <apic/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <vmcoreinfo/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:   </features>
Nov 22 08:49:49 compute-0 nova_compute[189268]:   <clock offset="utc">
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <timer name="hpet" present="no"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:   </clock>
Nov 22 08:49:49 compute-0 nova_compute[189268]:   <cpu mode="host-model" match="exact">
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:49:49 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <target dev="vda" bus="virtio"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <disk type="file" device="cdrom">
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <driver name="qemu" type="raw" cache="none"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.config"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <target dev="sda" bus="sata"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <interface type="ethernet">
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <mac address="fa:16:3e:7a:63:17"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <mtu size="1442"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <target dev="tap3f5ad619-9c"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     </interface>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <serial type="pty">
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <log file="/var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/console.log" append="off"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     </serial>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <video>
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     </video>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <input type="tablet" bus="usb"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <input type="keyboard" bus="usb"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <rng model="virtio">
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <backend model="random">/dev/urandom</backend>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <controller type="usb" index="0"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     <memballoon model="virtio">
Nov 22 08:49:49 compute-0 nova_compute[189268]:       <stats period="10"/>
Nov 22 08:49:49 compute-0 nova_compute[189268]:     </memballoon>
Nov 22 08:49:49 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:49:49 compute-0 nova_compute[189268]: </domain>
Nov 22 08:49:49 compute-0 nova_compute[189268]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 08:49:49 compute-0 podman[252131]: 2025-11-22 08:49:49.215004686 +0000 UTC m=+0.163625121 container died 4b8ce9d9a76ff91ec88923e9e0dee755bce11c23215e5b5b5bee0381cbddf28e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.224 189273 DEBUG oslo_concurrency.processutils [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.297 189273 DEBUG oslo_concurrency.processutils [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.298 189273 DEBUG oslo_concurrency.processutils [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:49:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-03ac45aed571e6cee9809f16606508038752bbce8a2db1f13c38a64182a964cf-merged.mount: Deactivated successfully.
Nov 22 08:49:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b8ce9d9a76ff91ec88923e9e0dee755bce11c23215e5b5b5bee0381cbddf28e-userdata-shm.mount: Deactivated successfully.
Nov 22 08:49:49 compute-0 podman[252131]: 2025-11-22 08:49:49.349903502 +0000 UTC m=+0.298523937 container cleanup 4b8ce9d9a76ff91ec88923e9e0dee755bce11c23215e5b5b5bee0381cbddf28e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.380 189273 DEBUG oslo_concurrency.processutils [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.382 189273 DEBUG nova.objects.instance [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4414e066-bc1a-4a63-b3a0-5e88f0553032 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:49:49 compute-0 systemd[1]: libpod-conmon-4b8ce9d9a76ff91ec88923e9e0dee755bce11c23215e5b5b5bee0381cbddf28e.scope: Deactivated successfully.
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.405 189273 DEBUG oslo_concurrency.processutils [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.442 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.471 189273 DEBUG oslo_concurrency.processutils [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.472 189273 DEBUG nova.virt.disk.api [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Checking if we can resize image /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.473 189273 DEBUG oslo_concurrency.processutils [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:49:49 compute-0 podman[252169]: 2025-11-22 08:49:49.503146312 +0000 UTC m=+0.072085209 container remove 4b8ce9d9a76ff91ec88923e9e0dee755bce11c23215e5b5b5bee0381cbddf28e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.516 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[8555bfa5-38b7-4698-a985-42159808360c]: (4, ('Sat Nov 22 08:49:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2 (4b8ce9d9a76ff91ec88923e9e0dee755bce11c23215e5b5b5bee0381cbddf28e)\n4b8ce9d9a76ff91ec88923e9e0dee755bce11c23215e5b5b5bee0381cbddf28e\nSat Nov 22 08:49:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2 (4b8ce9d9a76ff91ec88923e9e0dee755bce11c23215e5b5b5bee0381cbddf28e)\n4b8ce9d9a76ff91ec88923e9e0dee755bce11c23215e5b5b5bee0381cbddf28e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.518 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[35b6063a-418a-461a-a50a-d5cd83e5b602]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.519 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3485ad45-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.521 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:49 compute-0 kernel: tap3485ad45-c0: left promiscuous mode
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.525 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.528 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd3f8d8-3601-43f2-bfdd-176bf49ec7b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.548 189273 DEBUG oslo_concurrency.processutils [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.548 189273 DEBUG nova.virt.disk.api [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Cannot resize image /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.549 189273 DEBUG nova.objects.instance [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lazy-loading 'migration_context' on Instance uuid 4414e066-bc1a-4a63-b3a0-5e88f0553032 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.551 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.552 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4c58e7-a5d9-403e-9ad2-0065fb2420d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.554 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c5a0e0-e3d5-499c-94e0-9f2ccb7aa2e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.571 189273 DEBUG nova.compute.manager [req-9433adc6-b991-4710-9206-6bbbf705483d req-12a1aa63-2c73-4e75-a44d-a78a16b1fae8 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Received event network-vif-unplugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.572 189273 DEBUG oslo_concurrency.lockutils [req-9433adc6-b991-4710-9206-6bbbf705483d req-12a1aa63-2c73-4e75-a44d-a78a16b1fae8 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.573 189273 DEBUG oslo_concurrency.lockutils [req-9433adc6-b991-4710-9206-6bbbf705483d req-12a1aa63-2c73-4e75-a44d-a78a16b1fae8 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.573 189273 DEBUG oslo_concurrency.lockutils [req-9433adc6-b991-4710-9206-6bbbf705483d req-12a1aa63-2c73-4e75-a44d-a78a16b1fae8 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.574 189273 DEBUG nova.compute.manager [req-9433adc6-b991-4710-9206-6bbbf705483d req-12a1aa63-2c73-4e75-a44d-a78a16b1fae8 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] No waiting events found dispatching network-vif-unplugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.573 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[596d61b9-1f2b-4738-aa77-313d26bd830e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641888, 'reachable_time': 29139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252190, 'error': None, 'target': 'ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.574 189273 WARNING nova.compute.manager [req-9433adc6-b991-4710-9206-6bbbf705483d req-12a1aa63-2c73-4e75-a44d-a78a16b1fae8 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Received unexpected event network-vif-unplugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 for instance with vm_state active and task_state reboot_started_hard.
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.578 189273 DEBUG nova.virt.libvirt.vif [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T08:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1615837079',display_name='tempest-ServerActionsTestJSON-server-1615837079',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1615837079',id=8,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLdsHFflrgi7wGkvgkOXdCwC+kr9nW2mi1DXZmxLox1ZC0TuSJdcF2M8rMeuABQiSpoDl4gw87gDh3KsMHxzPzzF3d1/1OBKsUUK2YCN1YD+nS62FFKtRtMD4Bx9Y/yudw==',key_name='tempest-keypair-416169958',keypairs=<?>,launch_index=0,launched_at=2025-11-22T08:48:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='8de05c82cd5c4f7bbe156c45495011c2',ramdisk_id='',reservation_id='r-b52qwrco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-748326472',owner_user_name='tempest-ServerActionsTestJSON-748326472-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:49:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='16843c91d66144f880a31734be4d3dee',uuid=4414e066-bc1a-4a63-b3a0-5e88f0553032,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "address": "fa:16:3e:7a:63:17", "network": {"id": "3485ad45-c98a-4c02-b9a2-34cc945b16d2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1783802964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f5ad619-9c", "ovs_interfaceid": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.579 189273 DEBUG nova.network.os_vif_util [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Converting VIF {"id": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "address": "fa:16:3e:7a:63:17", "network": {"id": "3485ad45-c98a-4c02-b9a2-34cc945b16d2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1783802964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f5ad619-9c", "ovs_interfaceid": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.578 106754 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.579 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a25337-c156-4d1c-8790-df31c170ee87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d3485ad45\x2dc98a\x2d4c02\x2db9a2\x2d34cc945b16d2.mount: Deactivated successfully.
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.580 189273 DEBUG nova.network.os_vif_util [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:63:17,bridge_name='br-int',has_traffic_filtering=True,id=3f5ad619-9cef-49b4-b0fd-8243d3506e32,network=Network(3485ad45-c98a-4c02-b9a2-34cc945b16d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f5ad619-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.581 189273 DEBUG os_vif [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:63:17,bridge_name='br-int',has_traffic_filtering=True,id=3f5ad619-9cef-49b4-b0fd-8243d3506e32,network=Network(3485ad45-c98a-4c02-b9a2-34cc945b16d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f5ad619-9c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.582 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.583 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.584 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.588 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.589 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f5ad619-9c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.590 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3f5ad619-9c, col_values=(('external_ids', {'iface-id': '3f5ad619-9cef-49b4-b0fd-8243d3506e32', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:63:17', 'vm-uuid': '4414e066-bc1a-4a63-b3a0-5e88f0553032'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.592 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:49 compute-0 NetworkManager[56326]: <info>  [1763801389.5935] manager: (tap3f5ad619-9c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.596 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.599 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.600 189273 INFO os_vif [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:63:17,bridge_name='br-int',has_traffic_filtering=True,id=3f5ad619-9cef-49b4-b0fd-8243d3506e32,network=Network(3485ad45-c98a-4c02-b9a2-34cc945b16d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f5ad619-9c')
Nov 22 08:49:49 compute-0 kernel: tap3f5ad619-9c: entered promiscuous mode
Nov 22 08:49:49 compute-0 systemd-udevd[252096]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:49:49 compute-0 NetworkManager[56326]: <info>  [1763801389.7546] manager: (tap3f5ad619-9c): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.757 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.762 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:49 compute-0 ovn_controller[97783]: 2025-11-22T08:49:49Z|00119|binding|INFO|Claiming lport 3f5ad619-9cef-49b4-b0fd-8243d3506e32 for this chassis.
Nov 22 08:49:49 compute-0 ovn_controller[97783]: 2025-11-22T08:49:49Z|00120|binding|INFO|3f5ad619-9cef-49b4-b0fd-8243d3506e32: Claiming fa:16:3e:7a:63:17 10.100.0.14
Nov 22 08:49:49 compute-0 NetworkManager[56326]: <info>  [1763801389.7742] device (tap3f5ad619-9c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 08:49:49 compute-0 NetworkManager[56326]: <info>  [1763801389.7750] device (tap3f5ad619-9c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.784 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:63:17 10.100.0.14'], port_security=['fa:16:3e:7a:63:17 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4414e066-bc1a-4a63-b3a0-5e88f0553032', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3485ad45-c98a-4c02-b9a2-34cc945b16d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8de05c82cd5c4f7bbe156c45495011c2', 'neutron:revision_number': '5', 'neutron:security_group_ids': '4307701f-74fd-4973-8f0e-4204e8ea3fdd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5195068-1343-424b-8d74-4082a6f38e4c, chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=3f5ad619-9cef-49b4-b0fd-8243d3506e32) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.785 106642 INFO neutron.agent.ovn.metadata.agent [-] Port 3f5ad619-9cef-49b4-b0fd-8243d3506e32 in datapath 3485ad45-c98a-4c02-b9a2-34cc945b16d2 bound to our chassis
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.788 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3485ad45-c98a-4c02-b9a2-34cc945b16d2
Nov 22 08:49:49 compute-0 ovn_controller[97783]: 2025-11-22T08:49:49Z|00121|binding|INFO|Setting lport 3f5ad619-9cef-49b4-b0fd-8243d3506e32 ovn-installed in OVS
Nov 22 08:49:49 compute-0 ovn_controller[97783]: 2025-11-22T08:49:49Z|00122|binding|INFO|Setting lport 3f5ad619-9cef-49b4-b0fd-8243d3506e32 up in Southbound
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.792 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:49 compute-0 nova_compute[189268]: 2025-11-22 08:49:49.794 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.804 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[5a136954-b991-490d-a496-6f74cb5a6051]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.806 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3485ad45-c1 in ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.808 239666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3485ad45-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.808 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe37e38-0ea6-4b3b-8303-fbc98ab8bf2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.810 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e7156a-f28f-4a62-9254-ac02a59bd71c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.827 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[bbc085b1-39ae-418c-b61a-78023cb97526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:49 compute-0 systemd-machined[155703]: New machine qemu-12-instance-00000008.
Nov 22 08:49:49 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-00000008.
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.865 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4e3c6e-d783-497f-b370-4820273a83ad]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.904 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[b050d652-9b79-4c07-a61d-37e3a09bdccc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:49 compute-0 NetworkManager[56326]: <info>  [1763801389.9159] manager: (tap3485ad45-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.911 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[75ffc397-155e-46e0-91b4-42580704ec84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.968 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[c26aa85c-d094-4c9a-8ce9-2aca71365c9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:49 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:49.972 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[2aec777b-285c-4971-8d96-e051e66a38fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:50 compute-0 NetworkManager[56326]: <info>  [1763801390.0003] device (tap3485ad45-c0): carrier: link connected
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:50.006 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[d084b427-3ca4-4416-b13a-9e2cfc6ec279]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:50.026 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[1a5cd73a-bbf4-4a03-a32b-16fa0a3aa137]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3485ad45-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:9a:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650358, 'reachable_time': 36161, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252237, 'error': None, 'target': 'ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:50.045 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[11fd8e64-993b-4648-86ac-7c393c3de3e3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:9af2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650358, 'tstamp': 650358}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252238, 'error': None, 'target': 'ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:50.064 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[72177c7d-6b4f-4a04-b247-d7be9363c8a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3485ad45-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:9a:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650358, 'reachable_time': 36161, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252239, 'error': None, 'target': 'ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:50.093 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[a82ea464-8493-4010-bad6-a778bfc28b84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:50.151 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[9cad0529-24d2-4995-94e8-f4584af43260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:50.153 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3485ad45-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:50.153 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:50.154 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3485ad45-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:49:50 compute-0 NetworkManager[56326]: <info>  [1763801390.1580] manager: (tap3485ad45-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Nov 22 08:49:50 compute-0 nova_compute[189268]: 2025-11-22 08:49:50.156 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:50 compute-0 kernel: tap3485ad45-c0: entered promiscuous mode
Nov 22 08:49:50 compute-0 nova_compute[189268]: 2025-11-22 08:49:50.171 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:50.173 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3485ad45-c0, col_values=(('external_ids', {'iface-id': '37fb22bb-e01c-451f-a2d2-26ee384f1620'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:49:50 compute-0 ovn_controller[97783]: 2025-11-22T08:49:50Z|00123|binding|INFO|Releasing lport 37fb22bb-e01c-451f-a2d2-26ee384f1620 from this chassis (sb_readonly=0)
Nov 22 08:49:50 compute-0 nova_compute[189268]: 2025-11-22 08:49:50.175 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:50 compute-0 nova_compute[189268]: 2025-11-22 08:49:50.207 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:50.212 106642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3485ad45-c98a-4c02-b9a2-34cc945b16d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3485ad45-c98a-4c02-b9a2-34cc945b16d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:50.213 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[b44628f7-6993-49ff-a641-64f6abe85634]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:50.214 106642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]: global
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     log         /dev/log local0 debug
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     log-tag     haproxy-metadata-proxy-3485ad45-c98a-4c02-b9a2-34cc945b16d2
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     user        root
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     group       root
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     maxconn     1024
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     pidfile     /var/lib/neutron/external/pids/3485ad45-c98a-4c02-b9a2-34cc945b16d2.pid.haproxy
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     daemon
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]: defaults
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     log global
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     mode http
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     option httplog
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     option dontlognull
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     option http-server-close
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     option forwardfor
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     retries                 3
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     timeout http-request    30s
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     timeout connect         30s
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     timeout client          32s
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     timeout server          32s
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     timeout http-keep-alive 30s
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]: listen listener
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     bind 169.254.169.254:80
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:     http-request add-header X-OVN-Network-ID 3485ad45-c98a-4c02-b9a2-34cc945b16d2
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 08:49:50 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:50.217 106642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2', 'env', 'PROCESS_TAG=haproxy-3485ad45-c98a-4c02-b9a2-34cc945b16d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3485ad45-c98a-4c02-b9a2-34cc945b16d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 08:49:50 compute-0 nova_compute[189268]: 2025-11-22 08:49:50.326 189273 DEBUG nova.virt.libvirt.host [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Removed pending event for 4414e066-bc1a-4a63-b3a0-5e88f0553032 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 08:49:50 compute-0 nova_compute[189268]: 2025-11-22 08:49:50.326 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801390.3255973, 4414e066-bc1a-4a63-b3a0-5e88f0553032 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:49:50 compute-0 nova_compute[189268]: 2025-11-22 08:49:50.327 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] VM Resumed (Lifecycle Event)
Nov 22 08:49:50 compute-0 nova_compute[189268]: 2025-11-22 08:49:50.329 189273 DEBUG nova.compute.manager [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 08:49:50 compute-0 nova_compute[189268]: 2025-11-22 08:49:50.334 189273 INFO nova.virt.libvirt.driver [-] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Instance rebooted successfully.
Nov 22 08:49:50 compute-0 nova_compute[189268]: 2025-11-22 08:49:50.335 189273 DEBUG nova.compute.manager [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:49:50 compute-0 nova_compute[189268]: 2025-11-22 08:49:50.381 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:49:50 compute-0 nova_compute[189268]: 2025-11-22 08:49:50.393 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:49:50 compute-0 nova_compute[189268]: 2025-11-22 08:49:50.412 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Nov 22 08:49:50 compute-0 nova_compute[189268]: 2025-11-22 08:49:50.413 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801390.3290052, 4414e066-bc1a-4a63-b3a0-5e88f0553032 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:49:50 compute-0 nova_compute[189268]: 2025-11-22 08:49:50.414 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] VM Started (Lifecycle Event)
Nov 22 08:49:50 compute-0 nova_compute[189268]: 2025-11-22 08:49:50.438 189273 DEBUG oslo_concurrency.lockutils [None req-0cd3309b-7bfa-48cb-aed8-3789a0e0625b 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:49:50 compute-0 nova_compute[189268]: 2025-11-22 08:49:50.441 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:49:50 compute-0 nova_compute[189268]: 2025-11-22 08:49:50.459 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:49:50 compute-0 podman[252276]: 2025-11-22 08:49:50.708254301 +0000 UTC m=+0.097310536 container create b0710fa1d6c1d7a0978e00e37b2c2122983d4dbd99d08c8bcd9294e46f69648c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 08:49:50 compute-0 podman[252276]: 2025-11-22 08:49:50.64681886 +0000 UTC m=+0.035875195 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 08:49:50 compute-0 systemd[1]: Started libpod-conmon-b0710fa1d6c1d7a0978e00e37b2c2122983d4dbd99d08c8bcd9294e46f69648c.scope.
Nov 22 08:49:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:49:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac6e91c93d2c9b84d1ba723cb85e2881cb7c975ccf8f9f9156364b23f390566/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 08:49:50 compute-0 podman[252289]: 2025-11-22 08:49:50.827902409 +0000 UTC m=+0.079063297 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, managed_by=edpm_ansible, release=1214.1726694543, container_name=kepler, version=9.4, io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 22 08:49:50 compute-0 podman[252276]: 2025-11-22 08:49:50.842231004 +0000 UTC m=+0.231287279 container init b0710fa1d6c1d7a0978e00e37b2c2122983d4dbd99d08c8bcd9294e46f69648c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 08:49:50 compute-0 podman[252276]: 2025-11-22 08:49:50.849823908 +0000 UTC m=+0.238880153 container start b0710fa1d6c1d7a0978e00e37b2c2122983d4dbd99d08c8bcd9294e46f69648c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:49:50 compute-0 neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2[252310]: [NOTICE]   (252330) : New worker (252336) forked
Nov 22 08:49:50 compute-0 neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2[252310]: [NOTICE]   (252330) : Loading success.
Nov 22 08:49:50 compute-0 podman[252290]: 2025-11-22 08:49:50.915905175 +0000 UTC m=+0.153556500 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 08:49:50 compute-0 nova_compute[189268]: 2025-11-22 08:49:50.975 189273 DEBUG nova.objects.instance [None req-65e11143-81c2-4d40-9e4b-8f599170a260 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lazy-loading 'flavor' on Instance uuid 81db0af1-e2c6-4f76-a043-9d51b0431db0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:49:51 compute-0 nova_compute[189268]: 2025-11-22 08:49:51.000 189273 DEBUG oslo_concurrency.lockutils [None req-65e11143-81c2-4d40-9e4b-8f599170a260 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Acquiring lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:49:51 compute-0 nova_compute[189268]: 2025-11-22 08:49:51.712 189273 DEBUG nova.compute.manager [req-8ef096b0-2687-4ad3-8176-e8a6bba65676 req-8fe3e5d5-12fa-47c7-9951-7221576abef7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Received event network-vif-plugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:49:51 compute-0 nova_compute[189268]: 2025-11-22 08:49:51.713 189273 DEBUG oslo_concurrency.lockutils [req-8ef096b0-2687-4ad3-8176-e8a6bba65676 req-8fe3e5d5-12fa-47c7-9951-7221576abef7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:49:51 compute-0 nova_compute[189268]: 2025-11-22 08:49:51.713 189273 DEBUG oslo_concurrency.lockutils [req-8ef096b0-2687-4ad3-8176-e8a6bba65676 req-8fe3e5d5-12fa-47c7-9951-7221576abef7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:49:51 compute-0 nova_compute[189268]: 2025-11-22 08:49:51.714 189273 DEBUG oslo_concurrency.lockutils [req-8ef096b0-2687-4ad3-8176-e8a6bba65676 req-8fe3e5d5-12fa-47c7-9951-7221576abef7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:49:51 compute-0 nova_compute[189268]: 2025-11-22 08:49:51.714 189273 DEBUG nova.compute.manager [req-8ef096b0-2687-4ad3-8176-e8a6bba65676 req-8fe3e5d5-12fa-47c7-9951-7221576abef7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] No waiting events found dispatching network-vif-plugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:49:51 compute-0 nova_compute[189268]: 2025-11-22 08:49:51.714 189273 WARNING nova.compute.manager [req-8ef096b0-2687-4ad3-8176-e8a6bba65676 req-8fe3e5d5-12fa-47c7-9951-7221576abef7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Received unexpected event network-vif-plugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 for instance with vm_state active and task_state None.
Nov 22 08:49:51 compute-0 nova_compute[189268]: 2025-11-22 08:49:51.715 189273 DEBUG nova.compute.manager [req-8ef096b0-2687-4ad3-8176-e8a6bba65676 req-8fe3e5d5-12fa-47c7-9951-7221576abef7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Received event network-vif-plugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:49:51 compute-0 nova_compute[189268]: 2025-11-22 08:49:51.715 189273 DEBUG oslo_concurrency.lockutils [req-8ef096b0-2687-4ad3-8176-e8a6bba65676 req-8fe3e5d5-12fa-47c7-9951-7221576abef7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:49:51 compute-0 nova_compute[189268]: 2025-11-22 08:49:51.715 189273 DEBUG oslo_concurrency.lockutils [req-8ef096b0-2687-4ad3-8176-e8a6bba65676 req-8fe3e5d5-12fa-47c7-9951-7221576abef7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:49:51 compute-0 nova_compute[189268]: 2025-11-22 08:49:51.716 189273 DEBUG oslo_concurrency.lockutils [req-8ef096b0-2687-4ad3-8176-e8a6bba65676 req-8fe3e5d5-12fa-47c7-9951-7221576abef7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:49:51 compute-0 nova_compute[189268]: 2025-11-22 08:49:51.716 189273 DEBUG nova.compute.manager [req-8ef096b0-2687-4ad3-8176-e8a6bba65676 req-8fe3e5d5-12fa-47c7-9951-7221576abef7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] No waiting events found dispatching network-vif-plugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:49:51 compute-0 nova_compute[189268]: 2025-11-22 08:49:51.717 189273 WARNING nova.compute.manager [req-8ef096b0-2687-4ad3-8176-e8a6bba65676 req-8fe3e5d5-12fa-47c7-9951-7221576abef7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Received unexpected event network-vif-plugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 for instance with vm_state active and task_state None.
Nov 22 08:49:51 compute-0 nova_compute[189268]: 2025-11-22 08:49:51.717 189273 DEBUG nova.compute.manager [req-8ef096b0-2687-4ad3-8176-e8a6bba65676 req-8fe3e5d5-12fa-47c7-9951-7221576abef7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Received event network-vif-plugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:49:51 compute-0 nova_compute[189268]: 2025-11-22 08:49:51.717 189273 DEBUG oslo_concurrency.lockutils [req-8ef096b0-2687-4ad3-8176-e8a6bba65676 req-8fe3e5d5-12fa-47c7-9951-7221576abef7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:49:51 compute-0 nova_compute[189268]: 2025-11-22 08:49:51.718 189273 DEBUG oslo_concurrency.lockutils [req-8ef096b0-2687-4ad3-8176-e8a6bba65676 req-8fe3e5d5-12fa-47c7-9951-7221576abef7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:49:51 compute-0 nova_compute[189268]: 2025-11-22 08:49:51.718 189273 DEBUG oslo_concurrency.lockutils [req-8ef096b0-2687-4ad3-8176-e8a6bba65676 req-8fe3e5d5-12fa-47c7-9951-7221576abef7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:49:51 compute-0 nova_compute[189268]: 2025-11-22 08:49:51.718 189273 DEBUG nova.compute.manager [req-8ef096b0-2687-4ad3-8176-e8a6bba65676 req-8fe3e5d5-12fa-47c7-9951-7221576abef7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] No waiting events found dispatching network-vif-plugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:49:51 compute-0 nova_compute[189268]: 2025-11-22 08:49:51.719 189273 WARNING nova.compute.manager [req-8ef096b0-2687-4ad3-8176-e8a6bba65676 req-8fe3e5d5-12fa-47c7-9951-7221576abef7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Received unexpected event network-vif-plugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 for instance with vm_state active and task_state None.
Nov 22 08:49:52 compute-0 nova_compute[189268]: 2025-11-22 08:49:52.054 189273 DEBUG nova.network.neutron [req-ed2ea9cf-337a-48b0-a3ba-a2c3069c6306 req-f6529ee1-8f12-499d-88b9-b61ae52ce5ed 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Updated VIF entry in instance network info cache for port 5646e04c-958a-4629-b420-730d4967f183. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:49:52 compute-0 nova_compute[189268]: 2025-11-22 08:49:52.055 189273 DEBUG nova.network.neutron [req-ed2ea9cf-337a-48b0-a3ba-a2c3069c6306 req-f6529ee1-8f12-499d-88b9-b61ae52ce5ed 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Updating instance_info_cache with network_info: [{"id": "5646e04c-958a-4629-b420-730d4967f183", "address": "fa:16:3e:45:c8:ca", "network": {"id": "40cb6b69-21d1-494d-9388-79ae29386703", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1184475015-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3503f7b171c4187acaf1d66e260df45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5646e04c-95", "ovs_interfaceid": "5646e04c-958a-4629-b420-730d4967f183", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:49:52 compute-0 nova_compute[189268]: 2025-11-22 08:49:52.077 189273 DEBUG oslo_concurrency.lockutils [req-ed2ea9cf-337a-48b0-a3ba-a2c3069c6306 req-f6529ee1-8f12-499d-88b9-b61ae52ce5ed 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:49:52 compute-0 nova_compute[189268]: 2025-11-22 08:49:52.077 189273 DEBUG oslo_concurrency.lockutils [None req-65e11143-81c2-4d40-9e4b-8f599170a260 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Acquired lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:49:52 compute-0 nova_compute[189268]: 2025-11-22 08:49:52.361 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:53 compute-0 nova_compute[189268]: 2025-11-22 08:49:53.993 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:54 compute-0 nova_compute[189268]: 2025-11-22 08:49:54.203 189273 DEBUG nova.network.neutron [None req-65e11143-81c2-4d40-9e4b-8f599170a260 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 08:49:54 compute-0 nova_compute[189268]: 2025-11-22 08:49:54.395 189273 DEBUG nova.compute.manager [req-b425835d-c3b3-401c-900d-44d2b65f9804 req-32417fb8-fec6-4206-8ee1-37217631e15f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Received event network-changed-5646e04c-958a-4629-b420-730d4967f183 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:49:54 compute-0 nova_compute[189268]: 2025-11-22 08:49:54.395 189273 DEBUG nova.compute.manager [req-b425835d-c3b3-401c-900d-44d2b65f9804 req-32417fb8-fec6-4206-8ee1-37217631e15f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Refreshing instance network info cache due to event network-changed-5646e04c-958a-4629-b420-730d4967f183. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:49:54 compute-0 nova_compute[189268]: 2025-11-22 08:49:54.396 189273 DEBUG oslo_concurrency.lockutils [req-b425835d-c3b3-401c-900d-44d2b65f9804 req-32417fb8-fec6-4206-8ee1-37217631e15f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:49:54 compute-0 nova_compute[189268]: 2025-11-22 08:49:54.592 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:55 compute-0 nova_compute[189268]: 2025-11-22 08:49:55.119 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:49:56 compute-0 podman[252348]: 2025-11-22 08:49:56.139257146 +0000 UTC m=+0.086279240 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 22 08:49:57 compute-0 nova_compute[189268]: 2025-11-22 08:49:57.270 189273 DEBUG nova.network.neutron [None req-65e11143-81c2-4d40-9e4b-8f599170a260 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Updating instance_info_cache with network_info: [{"id": "5646e04c-958a-4629-b420-730d4967f183", "address": "fa:16:3e:45:c8:ca", "network": {"id": "40cb6b69-21d1-494d-9388-79ae29386703", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1184475015-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3503f7b171c4187acaf1d66e260df45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5646e04c-95", "ovs_interfaceid": "5646e04c-958a-4629-b420-730d4967f183", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:49:57 compute-0 nova_compute[189268]: 2025-11-22 08:49:57.362 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:57 compute-0 nova_compute[189268]: 2025-11-22 08:49:57.385 189273 DEBUG oslo_concurrency.lockutils [None req-65e11143-81c2-4d40-9e4b-8f599170a260 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Releasing lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:49:57 compute-0 nova_compute[189268]: 2025-11-22 08:49:57.386 189273 DEBUG nova.compute.manager [None req-65e11143-81c2-4d40-9e4b-8f599170a260 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Nov 22 08:49:57 compute-0 nova_compute[189268]: 2025-11-22 08:49:57.387 189273 DEBUG nova.compute.manager [None req-65e11143-81c2-4d40-9e4b-8f599170a260 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] network_info to inject: |[{"id": "5646e04c-958a-4629-b420-730d4967f183", "address": "fa:16:3e:45:c8:ca", "network": {"id": "40cb6b69-21d1-494d-9388-79ae29386703", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1184475015-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3503f7b171c4187acaf1d66e260df45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5646e04c-95", "ovs_interfaceid": "5646e04c-958a-4629-b420-730d4967f183", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Nov 22 08:49:57 compute-0 nova_compute[189268]: 2025-11-22 08:49:57.390 189273 DEBUG oslo_concurrency.lockutils [req-b425835d-c3b3-401c-900d-44d2b65f9804 req-32417fb8-fec6-4206-8ee1-37217631e15f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:49:57 compute-0 nova_compute[189268]: 2025-11-22 08:49:57.391 189273 DEBUG nova.network.neutron [req-b425835d-c3b3-401c-900d-44d2b65f9804 req-32417fb8-fec6-4206-8ee1-37217631e15f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Refreshing network info cache for port 5646e04c-958a-4629-b420-730d4967f183 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.145 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.146 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.147 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.148 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.281 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:49:58 compute-0 podman[252368]: 2025-11-22 08:49:58.295640121 +0000 UTC m=+0.083739453 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.345 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.346 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.408 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.422 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.471 189273 DEBUG oslo_concurrency.lockutils [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Acquiring lock "81db0af1-e2c6-4f76-a043-9d51b0431db0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.473 189273 DEBUG oslo_concurrency.lockutils [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lock "81db0af1-e2c6-4f76-a043-9d51b0431db0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.474 189273 DEBUG oslo_concurrency.lockutils [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Acquiring lock "81db0af1-e2c6-4f76-a043-9d51b0431db0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.474 189273 DEBUG oslo_concurrency.lockutils [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lock "81db0af1-e2c6-4f76-a043-9d51b0431db0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.475 189273 DEBUG oslo_concurrency.lockutils [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lock "81db0af1-e2c6-4f76-a043-9d51b0431db0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.478 189273 INFO nova.compute.manager [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Terminating instance
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.480 189273 DEBUG nova.compute.manager [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.492 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.493 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:49:58 compute-0 kernel: tap5646e04c-95 (unregistering): left promiscuous mode
Nov 22 08:49:58 compute-0 NetworkManager[56326]: <info>  [1763801398.5278] device (tap5646e04c-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.550 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:58 compute-0 ovn_controller[97783]: 2025-11-22T08:49:58Z|00124|binding|INFO|Releasing lport 5646e04c-958a-4629-b420-730d4967f183 from this chassis (sb_readonly=0)
Nov 22 08:49:58 compute-0 ovn_controller[97783]: 2025-11-22T08:49:58Z|00125|binding|INFO|Setting lport 5646e04c-958a-4629-b420-730d4967f183 down in Southbound
Nov 22 08:49:58 compute-0 ovn_controller[97783]: 2025-11-22T08:49:58Z|00126|binding|INFO|Removing iface tap5646e04c-95 ovn-installed in OVS
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.562 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:58 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:58.566 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:c8:ca 10.100.0.9'], port_security=['fa:16:3e:45:c8:ca 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '81db0af1-e2c6-4f76-a043-9d51b0431db0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40cb6b69-21d1-494d-9388-79ae29386703', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3503f7b171c4187acaf1d66e260df45', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0a269c81-10ed-4489-b2c0-d40e635cf9cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74930d9b-5b3a-4c37-ba41-b8ad01a238b4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=5646e04c-958a-4629-b420-730d4967f183) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:49:58 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:58.568 106642 INFO neutron.agent.ovn.metadata.agent [-] Port 5646e04c-958a-4629-b420-730d4967f183 in datapath 40cb6b69-21d1-494d-9388-79ae29386703 unbound from our chassis
Nov 22 08:49:58 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:58.571 106642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 40cb6b69-21d1-494d-9388-79ae29386703, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 08:49:58 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:58.573 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[b06a0f16-2c92-4fa5-9ba2-b5f7ada19eb9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:58 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:58.575 106642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703 namespace which is not needed anymore
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.580 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:58 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 22 08:49:58 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000009.scope: Consumed 46.697s CPU time.
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.598 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2/disk --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:49:58 compute-0 systemd-machined[155703]: Machine qemu-8-instance-00000009 terminated.
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.763 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:49:58 compute-0 neutron-haproxy-ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703[251044]: [NOTICE]   (251051) : haproxy version is 2.8.14-c23fe91
Nov 22 08:49:58 compute-0 neutron-haproxy-ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703[251044]: [NOTICE]   (251051) : path to executable is /usr/sbin/haproxy
Nov 22 08:49:58 compute-0 neutron-haproxy-ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703[251044]: [WARNING]  (251051) : Exiting Master process...
Nov 22 08:49:58 compute-0 neutron-haproxy-ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703[251044]: [ALERT]    (251051) : Current worker (251053) exited with code 143 (Terminated)
Nov 22 08:49:58 compute-0 neutron-haproxy-ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703[251044]: [WARNING]  (251051) : All workers exited. Exiting... (0)
Nov 22 08:49:58 compute-0 systemd[1]: libpod-22c280efe9ee28c58e958e6eef33485141fa94aba15535b1badc0b7b1bcac666.scope: Deactivated successfully.
Nov 22 08:49:58 compute-0 podman[252430]: 2025-11-22 08:49:58.788193023 +0000 UTC m=+0.084409170 container died 22c280efe9ee28c58e958e6eef33485141fa94aba15535b1badc0b7b1bcac666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.788 189273 INFO nova.virt.libvirt.driver [-] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Instance destroyed successfully.
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.789 189273 DEBUG nova.objects.instance [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lazy-loading 'resources' on Instance uuid 81db0af1-e2c6-4f76-a043-9d51b0431db0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.807 189273 DEBUG nova.virt.libvirt.vif [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T08:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1971201621',display_name='tempest-AttachInterfacesUnderV243Test-server-1971201621',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1971201621',id=9,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO0yQV+F7bUJ9i43S8GR8OAd0yxgsoOb8NPOhNiR3uK9S9NmmHM/BRImo4Z4Aq1ynKJ4PnRN3sSq5RWnN7QeY5ydkY8mnNlSZCKT98aFK5ToiaKz/eN8dHn5gNGqJOZSsw==',key_name='tempest-keypair-1162532163',keypairs=<?>,launch_index=0,launched_at=2025-11-22T08:48:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3503f7b171c4187acaf1d66e260df45',ramdisk_id='',reservation_id='r-r91c0l9v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1707587668',owner_user_name='tempest-AttachInterfacesUnderV243Test-1707587668-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T08:49:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d19b7a27c3e74d08af788a67b85247fc',uuid=81db0af1-e2c6-4f76-a043-9d51b0431db0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5646e04c-958a-4629-b420-730d4967f183", "address": "fa:16:3e:45:c8:ca", "network": {"id": "40cb6b69-21d1-494d-9388-79ae29386703", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1184475015-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3503f7b171c4187acaf1d66e260df45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5646e04c-95", "ovs_interfaceid": "5646e04c-958a-4629-b420-730d4967f183", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.807 189273 DEBUG nova.network.os_vif_util [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Converting VIF {"id": "5646e04c-958a-4629-b420-730d4967f183", "address": "fa:16:3e:45:c8:ca", "network": {"id": "40cb6b69-21d1-494d-9388-79ae29386703", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1184475015-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3503f7b171c4187acaf1d66e260df45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5646e04c-95", "ovs_interfaceid": "5646e04c-958a-4629-b420-730d4967f183", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.809 189273 DEBUG nova.network.os_vif_util [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:c8:ca,bridge_name='br-int',has_traffic_filtering=True,id=5646e04c-958a-4629-b420-730d4967f183,network=Network(40cb6b69-21d1-494d-9388-79ae29386703),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5646e04c-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.809 189273 DEBUG os_vif [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:c8:ca,bridge_name='br-int',has_traffic_filtering=True,id=5646e04c-958a-4629-b420-730d4967f183,network=Network(40cb6b69-21d1-494d-9388-79ae29386703),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5646e04c-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
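The Converting/Converted/Unplugging entries above show nova translating its VIF dict into an os-vif object and handing it to the os-vif library, which owns the actual OVS plumbing. A minimal sketch of the same unplug call, using only field values visible in this log (the trimmed object below is an illustration, not nova's code):

    # Sketch: unplug an OVS VIF via the os-vif library, mirroring the
    # VIFOpenVSwitch dump logged above. Fields are trimmed to the minimum
    # needed for illustration; values are the ones from this log.
    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # loads the os-vif plugins, including 'ovs'

    ovs_vif = vif.VIFOpenVSwitch(
        id='5646e04c-958a-4629-b420-730d4967f183',
        address='fa:16:3e:45:c8:ca',
        bridge_name='br-int',
        vif_name='tap5646e04c-95',
    )
    instance = instance_info.InstanceInfo(
        uuid='81db0af1-e2c6-4f76-a043-9d51b0431db0',
        name='instance-00000009',
    )
    os_vif.unplug(ovs_vif, instance)  # detaches tap5646e04c-95 from br-int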
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.812 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.813 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5646e04c-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.815 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.816 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.820 189273 INFO os_vif [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:c8:ca,bridge_name='br-int',has_traffic_filtering=True,id=5646e04c-958a-4629-b420-730d4967f183,network=Network(40cb6b69-21d1-494d-9388-79ae29386703),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5646e04c-95')
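The DelPortCommand transaction at 08:49:58.813 is what the os-vif ovs plugin issued through ovsdbapp; it is functionally the same as the ovs-vsctl one-liner below, which can be handy when a failed unplug leaves a stale port behind (illustration only, not what nova runs):

    # Hand-rolled equivalent of the DelPortCommand txn logged above:
    # remove the tap port from br-int, tolerating an already-deleted port.
    import subprocess

    subprocess.run(
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-int', 'tap5646e04c-95'],
        check=True,
    )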
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.821 189273 INFO nova.virt.libvirt.driver [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Deleting instance files /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0_del
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.822 189273 INFO nova.virt.libvirt.driver [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Deletion of /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0_del complete
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.844 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk --force-share --output=json" returned: 1 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.845 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] '/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk --force-share --output=json' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
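The CMD line above records how nova probes the instance disk: qemu-img info run under oslo.concurrency's prlimit wrapper with a 1 GiB address-space cap and a 30 s CPU cap, emitting JSON. A stdlib-only sketch that applies the same limits (assumes Linux and qemu-img on PATH; this is not the oslo_concurrency implementation):

    # Sketch: run `qemu-img info` with the resource caps shown in the log
    # (RLIMIT_AS = 1073741824 bytes, RLIMIT_CPU = 30 s).
    import json
    import os
    import resource
    import subprocess

    def limit_resources():
        resource.setrlimit(resource.RLIMIT_AS, (1073741824, 1073741824))
        resource.setrlimit(resource.RLIMIT_CPU, (30, 30))

    proc = subprocess.run(
        ['qemu-img', 'info', '--force-share', '--output=json',
         '/var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk'],
        env=dict(os.environ, LC_ALL='C', LANG='C'),
        capture_output=True, text=True, preexec_fn=limit_resources,
    )
    if proc.returncode != 0:
        # Non-zero exit is exactly the "returned: 1" above: the disk was
        # deleted underneath the periodic task (see the DiskNotFound
        # warning a few lines below).
        print(proc.stderr.strip())
    else:
        print(json.loads(proc.stdout)['virtual-size'])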
Nov 22 08:49:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22c280efe9ee28c58e958e6eef33485141fa94aba15535b1badc0b7b1bcac666-userdata-shm.mount: Deactivated successfully.
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.846 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Periodic task is updating the host stats, it is trying to get disk info for instance-00000009, but the backing disk storage was removed by a concurrent operation such as resize. Error: No disk at /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk: nova.exception.DiskNotFound: No disk at /var/lib/nova/instances/81db0af1-e2c6-4f76-a043-9d51b0431db0/disk
Nov 22 08:49:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-52ed8a6a4d0547e61f25b9e0344ab7ec8777f0727302d5cc56cea6782a9290c8-merged.mount: Deactivated successfully.
Nov 22 08:49:58 compute-0 podman[252430]: 2025-11-22 08:49:58.86693351 +0000 UTC m=+0.163149647 container cleanup 22c280efe9ee28c58e958e6eef33485141fa94aba15535b1badc0b7b1bcac666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:49:58 compute-0 systemd[1]: libpod-conmon-22c280efe9ee28c58e958e6eef33485141fa94aba15535b1badc0b7b1bcac666.scope: Deactivated successfully.
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.882 189273 INFO nova.compute.manager [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Took 0.40 seconds to destroy the instance on the hypervisor.
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.883 189273 DEBUG oslo.service.loopingcall [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.884 189273 DEBUG nova.compute.manager [-] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.884 189273 DEBUG nova.network.neutron [-] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 08:49:58 compute-0 podman[252478]: 2025-11-22 08:49:58.982691153 +0000 UTC m=+0.084738490 container remove 22c280efe9ee28c58e958e6eef33485141fa94aba15535b1badc0b7b1bcac666 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 08:49:58 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:58.992 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[1a70af4d-3ec2-4f2f-a034-d81d8b466ad3]: (4, ('Sat Nov 22 08:49:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703 (22c280efe9ee28c58e958e6eef33485141fa94aba15535b1badc0b7b1bcac666)\n22c280efe9ee28c58e958e6eef33485141fa94aba15535b1badc0b7b1bcac666\nSat Nov 22 08:49:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703 (22c280efe9ee28c58e958e6eef33485141fa94aba15535b1badc0b7b1bcac666)\n22c280efe9ee28c58e958e6eef33485141fa94aba15535b1badc0b7b1bcac666\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:58 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:58.994 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[84dea68d-40e6-4dca-b06d-d6c98f2ecb87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:58 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:58.996 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40cb6b69-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:49:58 compute-0 nova_compute[189268]: 2025-11-22 08:49:58.998 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:59 compute-0 kernel: tap40cb6b69-20: left promiscuous mode
Nov 22 08:49:59 compute-0 nova_compute[189268]: 2025-11-22 08:49:59.014 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:49:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:59.017 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[48f16085-1315-4439-a919-f46d4106226f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:59.033 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[abecfd5d-cc40-435b-8068-f5988675b595]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:59.035 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[60416b4a-f889-4f7a-b8e0-04c760ea234e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:59.055 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[994f473d-1a7a-406b-b77a-94ebe1153937]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641696, 'reachable_time': 32640, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252493, 'error': None, 'target': 'ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:49:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d40cb6b69\x2d21d1\x2d494d\x2d9388\x2d79ae29386703.mount: Deactivated successfully.
Nov 22 08:49:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:59.064 106754 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 08:49:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:49:59.064 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[2958706c-af1a-4221-b854-20c098f4864d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
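Once the haproxy container and its tap port are gone, the agent deletes the ovnmeta- namespace itself; remove_netns in neutron's privileged ip_lib is a thin wrapper over pyroute2. A sketch of the same cleanup (requires root; illustration, not neutron's code):

    # Sketch: delete a named network namespace, skipping it if already gone.
    from pyroute2 import netns

    NS = 'ovnmeta-40cb6b69-21d1-494d-9388-79ae29386703'
    if NS in netns.listnetns():
        netns.remove(NS)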
Nov 22 08:49:59 compute-0 nova_compute[189268]: 2025-11-22 08:49:59.310 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:49:59 compute-0 nova_compute[189268]: 2025-11-22 08:49:59.311 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4895MB free_disk=72.40150451660156GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:49:59 compute-0 nova_compute[189268]: 2025-11-22 08:49:59.312 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:49:59 compute-0 nova_compute[189268]: 2025-11-22 08:49:59.312 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:49:59 compute-0 nova_compute[189268]: 2025-11-22 08:49:59.415 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4414e066-bc1a-4a63-b3a0-5e88f0553032 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:49:59 compute-0 nova_compute[189268]: 2025-11-22 08:49:59.416 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 81db0af1-e2c6-4f76-a043-9d51b0431db0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:49:59 compute-0 nova_compute[189268]: 2025-11-22 08:49:59.416 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 94198e9a-a485-4010-9e92-6132c12413f2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:49:59 compute-0 nova_compute[189268]: 2025-11-22 08:49:59.416 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:49:59 compute-0 nova_compute[189268]: 2025-11-22 08:49:59.417 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:49:59 compute-0 nova_compute[189268]: 2025-11-22 08:49:59.507 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:49:59 compute-0 nova_compute[189268]: 2025-11-22 08:49:59.519 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:49:59 compute-0 nova_compute[189268]: 2025-11-22 08:49:59.538 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:49:59 compute-0 nova_compute[189268]: 2025-11-22 08:49:59.538 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
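For reference, the inventory dict logged at 08:49:59.519 maps onto schedulable capacity the way placement computes it: capacity = (total - reserved) * allocation_ratio. Worked out for this host (arithmetic only, not nova code):

    # Capacity implied by the logged inventory:
    #   capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2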
Nov 22 08:49:59 compute-0 podman[203476]: time="2025-11-22T08:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:49:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30754 "" "Go-http-client/1.1"
Nov 22 08:49:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5269 "" "Go-http-client/1.1"
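The two GET lines above are plain HTTP against the libpod API on the podman service socket (the same /run/podman/podman.sock the podman_exporter container mounts, per its config further down). A stdlib-only sketch of the first call, with the socket path and endpoint taken from this log:

    # Sketch: query the libpod containers/json endpoint over the unix socket.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, sock_path):
            super().__init__('localhost')
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    for ctr in json.loads(conn.getresponse().read()):
        print(ctr['Names'])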
Nov 22 08:49:59 compute-0 nova_compute[189268]: 2025-11-22 08:49:59.821 189273 DEBUG nova.network.neutron [req-b425835d-c3b3-401c-900d-44d2b65f9804 req-32417fb8-fec6-4206-8ee1-37217631e15f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Updated VIF entry in instance network info cache for port 5646e04c-958a-4629-b420-730d4967f183. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:49:59 compute-0 nova_compute[189268]: 2025-11-22 08:49:59.822 189273 DEBUG nova.network.neutron [req-b425835d-c3b3-401c-900d-44d2b65f9804 req-32417fb8-fec6-4206-8ee1-37217631e15f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Updating instance_info_cache with network_info: [{"id": "5646e04c-958a-4629-b420-730d4967f183", "address": "fa:16:3e:45:c8:ca", "network": {"id": "40cb6b69-21d1-494d-9388-79ae29386703", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1184475015-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3503f7b171c4187acaf1d66e260df45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5646e04c-95", "ovs_interfaceid": "5646e04c-958a-4629-b420-730d4967f183", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:49:59 compute-0 nova_compute[189268]: 2025-11-22 08:49:59.840 189273 DEBUG oslo_concurrency.lockutils [req-b425835d-c3b3-401c-900d-44d2b65f9804 req-32417fb8-fec6-4206-8ee1-37217631e15f 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-81db0af1-e2c6-4f76-a043-9d51b0431db0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:50:00 compute-0 nova_compute[189268]: 2025-11-22 08:50:00.331 189273 DEBUG nova.compute.manager [req-0c904bb7-9f79-4b48-9280-013769ea2465 req-589230e1-0a19-4c48-861e-3120e8679948 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Received event network-vif-unplugged-5646e04c-958a-4629-b420-730d4967f183 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:50:00 compute-0 nova_compute[189268]: 2025-11-22 08:50:00.332 189273 DEBUG oslo_concurrency.lockutils [req-0c904bb7-9f79-4b48-9280-013769ea2465 req-589230e1-0a19-4c48-861e-3120e8679948 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "81db0af1-e2c6-4f76-a043-9d51b0431db0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:50:00 compute-0 nova_compute[189268]: 2025-11-22 08:50:00.332 189273 DEBUG oslo_concurrency.lockutils [req-0c904bb7-9f79-4b48-9280-013769ea2465 req-589230e1-0a19-4c48-861e-3120e8679948 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "81db0af1-e2c6-4f76-a043-9d51b0431db0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:50:00 compute-0 nova_compute[189268]: 2025-11-22 08:50:00.332 189273 DEBUG oslo_concurrency.lockutils [req-0c904bb7-9f79-4b48-9280-013769ea2465 req-589230e1-0a19-4c48-861e-3120e8679948 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "81db0af1-e2c6-4f76-a043-9d51b0431db0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
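The Acquiring/acquired/released triplets that recur throughout this log are oslo.concurrency's named locks; each guards a short critical section (here, popping a per-instance event). The underlying pattern, sketched with a hypothetical function body and the lock name from the log:

    # Sketch of the oslo_concurrency.lockutils pattern behind the
    # Acquiring/acquired/released lines: a named, process-local lock.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('81db0af1-e2c6-4f76-a043-9d51b0431db0-events')
    def pop_event():
        # critical section: look up and remove the waiting event, if any
        return None

    pop_event()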
Nov 22 08:50:00 compute-0 nova_compute[189268]: 2025-11-22 08:50:00.332 189273 DEBUG nova.compute.manager [req-0c904bb7-9f79-4b48-9280-013769ea2465 req-589230e1-0a19-4c48-861e-3120e8679948 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] No waiting events found dispatching network-vif-unplugged-5646e04c-958a-4629-b420-730d4967f183 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:50:00 compute-0 nova_compute[189268]: 2025-11-22 08:50:00.333 189273 DEBUG nova.compute.manager [req-0c904bb7-9f79-4b48-9280-013769ea2465 req-589230e1-0a19-4c48-861e-3120e8679948 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Received event network-vif-unplugged-5646e04c-958a-4629-b420-730d4967f183 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 08:50:01 compute-0 nova_compute[189268]: 2025-11-22 08:50:01.194 189273 DEBUG nova.network.neutron [-] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:50:01 compute-0 nova_compute[189268]: 2025-11-22 08:50:01.220 189273 INFO nova.compute.manager [-] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Took 2.34 seconds to deallocate network for instance.
Nov 22 08:50:01 compute-0 nova_compute[189268]: 2025-11-22 08:50:01.265 189273 DEBUG oslo_concurrency.lockutils [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:50:01 compute-0 nova_compute[189268]: 2025-11-22 08:50:01.266 189273 DEBUG oslo_concurrency.lockutils [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:50:01 compute-0 ovn_controller[97783]: 2025-11-22T08:50:01Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:79:78 10.100.0.14
Nov 22 08:50:01 compute-0 ovn_controller[97783]: 2025-11-22T08:50:01Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:79:78 10.100.0.14
Nov 22 08:50:01 compute-0 nova_compute[189268]: 2025-11-22 08:50:01.368 189273 DEBUG nova.compute.provider_tree [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:50:01 compute-0 nova_compute[189268]: 2025-11-22 08:50:01.379 189273 DEBUG nova.scheduler.client.report [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:50:01 compute-0 nova_compute[189268]: 2025-11-22 08:50:01.397 189273 DEBUG oslo_concurrency.lockutils [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:50:01 compute-0 openstack_network_exporter[205661]: ERROR   08:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:50:01 compute-0 openstack_network_exporter[205661]: ERROR   08:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:50:01 compute-0 openstack_network_exporter[205661]: ERROR   08:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:50:01 compute-0 openstack_network_exporter[205661]: ERROR   08:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:50:01 compute-0 openstack_network_exporter[205661]: ERROR   08:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:50:01 compute-0 nova_compute[189268]: 2025-11-22 08:50:01.454 189273 INFO nova.scheduler.client.report [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Deleted allocations for instance 81db0af1-e2c6-4f76-a043-9d51b0431db0
Nov 22 08:50:01 compute-0 nova_compute[189268]: 2025-11-22 08:50:01.532 189273 DEBUG oslo_concurrency.lockutils [None req-ee768403-1758-4749-bf8c-bd31533f9245 d19b7a27c3e74d08af788a67b85247fc a3503f7b171c4187acaf1d66e260df45 - - default default] Lock "81db0af1-e2c6-4f76-a043-9d51b0431db0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:50:01 compute-0 nova_compute[189268]: 2025-11-22 08:50:01.557 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:01 compute-0 nova_compute[189268]: 2025-11-22 08:50:01.763 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:02 compute-0 nova_compute[189268]: 2025-11-22 08:50:02.366 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:02 compute-0 nova_compute[189268]: 2025-11-22 08:50:02.565 189273 DEBUG nova.compute.manager [req-a25474e5-1b64-47c7-9af5-73f65279057f req-40e80142-5a59-46a4-9683-647a2ec1ca91 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Received event network-vif-plugged-5646e04c-958a-4629-b420-730d4967f183 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:50:02 compute-0 nova_compute[189268]: 2025-11-22 08:50:02.565 189273 DEBUG oslo_concurrency.lockutils [req-a25474e5-1b64-47c7-9af5-73f65279057f req-40e80142-5a59-46a4-9683-647a2ec1ca91 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "81db0af1-e2c6-4f76-a043-9d51b0431db0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:50:02 compute-0 nova_compute[189268]: 2025-11-22 08:50:02.566 189273 DEBUG oslo_concurrency.lockutils [req-a25474e5-1b64-47c7-9af5-73f65279057f req-40e80142-5a59-46a4-9683-647a2ec1ca91 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "81db0af1-e2c6-4f76-a043-9d51b0431db0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:50:02 compute-0 nova_compute[189268]: 2025-11-22 08:50:02.566 189273 DEBUG oslo_concurrency.lockutils [req-a25474e5-1b64-47c7-9af5-73f65279057f req-40e80142-5a59-46a4-9683-647a2ec1ca91 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "81db0af1-e2c6-4f76-a043-9d51b0431db0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:50:02 compute-0 nova_compute[189268]: 2025-11-22 08:50:02.566 189273 DEBUG nova.compute.manager [req-a25474e5-1b64-47c7-9af5-73f65279057f req-40e80142-5a59-46a4-9683-647a2ec1ca91 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] No waiting events found dispatching network-vif-plugged-5646e04c-958a-4629-b420-730d4967f183 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:50:02 compute-0 nova_compute[189268]: 2025-11-22 08:50:02.567 189273 WARNING nova.compute.manager [req-a25474e5-1b64-47c7-9af5-73f65279057f req-40e80142-5a59-46a4-9683-647a2ec1ca91 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Received unexpected event network-vif-plugged-5646e04c-958a-4629-b420-730d4967f183 for instance with vm_state deleted and task_state None.
Nov 22 08:50:02 compute-0 nova_compute[189268]: 2025-11-22 08:50:02.567 189273 DEBUG nova.compute.manager [req-a25474e5-1b64-47c7-9af5-73f65279057f req-40e80142-5a59-46a4-9683-647a2ec1ca91 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Received event network-vif-deleted-5646e04c-958a-4629-b420-730d4967f183 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:50:03 compute-0 nova_compute[189268]: 2025-11-22 08:50:03.818 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:07 compute-0 nova_compute[189268]: 2025-11-22 08:50:07.369 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:08 compute-0 podman[252509]: 2025-11-22 08:50:08.123808715 +0000 UTC m=+0.066518130 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 08:50:08 compute-0 podman[252508]: 2025-11-22 08:50:08.138805628 +0000 UTC m=+0.082834968 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 08:50:08 compute-0 podman[252510]: 2025-11-22 08:50:08.139207089 +0000 UTC m=+0.075990435 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 08:50:08 compute-0 nova_compute[189268]: 2025-11-22 08:50:08.820 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:09.992 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:50:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:09.992 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:50:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:09.993 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:50:12 compute-0 nova_compute[189268]: 2025-11-22 08:50:12.370 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:13 compute-0 nova_compute[189268]: 2025-11-22 08:50:13.760 189273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763801398.758345, 81db0af1-e2c6-4f76-a043-9d51b0431db0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:50:13 compute-0 nova_compute[189268]: 2025-11-22 08:50:13.762 189273 INFO nova.compute.manager [-] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] VM Stopped (Lifecycle Event)
Nov 22 08:50:13 compute-0 nova_compute[189268]: 2025-11-22 08:50:13.786 189273 DEBUG nova.compute.manager [None req-f7d388dd-87ea-4a29-97bb-233c731adbcc - - - - - -] [instance: 81db0af1-e2c6-4f76-a043-9d51b0431db0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:50:13 compute-0 nova_compute[189268]: 2025-11-22 08:50:13.824 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:17 compute-0 nova_compute[189268]: 2025-11-22 08:50:17.374 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:18 compute-0 podman[252566]: 2025-11-22 08:50:18.147299799 +0000 UTC m=+0.092182370 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:50:18 compute-0 podman[252565]: 2025-11-22 08:50:18.158419379 +0000 UTC m=+0.098318956 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:50:18 compute-0 nova_compute[189268]: 2025-11-22 08:50:18.828 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:21 compute-0 podman[252602]: 2025-11-22 08:50:21.154669703 +0000 UTC m=+0.100768440 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, build-date=2024-09-18T21:23:30, distribution-scope=public, vcs-type=git)
Nov 22 08:50:21 compute-0 podman[252603]: 2025-11-22 08:50:21.208047659 +0000 UTC m=+0.146910031 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.095 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.096 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
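The two DEBUG lines above show the polling manager weighing the pollster count against its worker-thread budget before dispatching work. A minimal sketch of that pattern, using hypothetical names rather than ceilometer's actual internals:

    from concurrent.futures import ThreadPoolExecutor

    def execute_polling_task(pollsters, threads=1):
        # Hypothetical sketch: warn when there are more pollsters than worker
        # threads, then run them all on one shared executor, as logged above.
        if len(pollsters) > threads:
            print("more pollsters than worker threads; expect a longer cycle")
        with ThreadPoolExecutor(max_workers=threads) as executor:
            futures = [executor.submit(p) for p in pollsters]
            return [f.result() for f in futures]

    execute_polling_task([lambda: "network.incoming.bytes"], threads=1)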
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.096 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.096 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
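Each "Registering pollster" line above hands one stevedore extension to the same ThreadPoolExecutor along with shared cache, history, and discovery-cache dictionaries. The discovery cache is the notable one: a discovery method such as local_instances runs once per cycle, and later pollsters reuse its result. A rough sketch under those assumptions:

    def run_discovery(method, discovery_cache, discover):
        # Hypothetical per-cycle discovery cache: the first pollster that
        # needs a discovery method pays for it; the rest hit the cache.
        if method not in discovery_cache:
            discovery_cache[method] = discover(method)
        return discovery_cache[method]

    cache = {}
    first = run_discovery("local_instances", cache, lambda m: ["instance-00000008"])
    again = run_discovery("local_instances", cache, lambda m: ["not called"])  # cache hit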
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.103 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4414e066-bc1a-4a63-b3a0-5e88f0553032', 'name': 'tempest-ServerActionsTestJSON-server-1615837079', 'flavor': {'id': '60cc47c3-347f-4964-bb52-9bef8d0548a9', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000008', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8de05c82cd5c4f7bbe156c45495011c2', 'user_id': '16843c91d66144f880a31734be4d3dee', 'hostId': 'cb497ba1e773e2e6462feb93636d252fa5d5837a65e831f3361fe641', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.105 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 94198e9a-a485-4010-9e92-6132c12413f2 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 22 08:50:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:22.106 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/94198e9a-a485-4010-9e92-6132c12413f2 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}41de7311aa3eb0f3adb679afd5ea377bdc27c99a5c84bf2ba532fbbe80a7016c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
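The REQ line is novaclient's curl-style request log; note that the X-Auth-Token value is printed as a SHA256 digest, not the real token. The same metadata lookup could be reproduced roughly as below, where TOKEN and the CA bundle path are assumptions:

    import requests

    TOKEN = "gAAAA..."  # a real Keystone token; the log only shows its digest
    resp = requests.get(
        "https://nova-internal.openstack.svc:8774/v2.1/servers/"
        "94198e9a-a485-4010-9e92-6132c12413f2",
        headers={
            "Accept": "application/json",
            "X-Auth-Token": TOKEN,
            "X-OpenStack-Nova-API-Version": "2.1",
        },
        verify="/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem",  # assumed path
        timeout=30,
    )
    print(resp.json()["server"]["name"])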
Nov 22 08:50:22 compute-0 nova_compute[189268]: 2025-11-22 08:50:22.375 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:23 compute-0 nova_compute[189268]: 2025-11-22 08:50:23.832 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:27 compute-0 podman[252658]: 2025-11-22 08:50:27.137179256 +0000 UTC m=+0.088886931 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, managed_by=edpm_ansible, container_name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Nov 22 08:50:27 compute-0 nova_compute[189268]: 2025-11-22 08:50:27.379 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:28 compute-0 nova_compute[189268]: 2025-11-22 08:50:28.837 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:29 compute-0 podman[252678]: 2025-11-22 08:50:29.246206477 +0000 UTC m=+0.195233830 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 22 08:50:29 compute-0 podman[203476]: time="2025-11-22T08:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:50:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30754 "" "Go-http-client/1.1"
Nov 22 08:50:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5267 "" "Go-http-client/1.1"
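The two "@ - - [...] GET /v4.9.3/libpod/..." lines are the podman system service logging REST calls from a Go client. The same endpoint can be queried over the unix socket with only the standard library; the socket path below is an assumption (the rootful default):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # Minimal HTTP-over-unix-socket client for the libpod REST API.
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")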
Nov 22 08:50:30 compute-0 nova_compute[189268]: 2025-11-22 08:50:30.360 189273 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 1.87 sec
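The WARNING comes from oslo.service's looping-call machinery: the _report_state callback took 1.87 s longer than its fixed interval, so the next run started late. A stripped-down sketch of that check (not oslo's actual code):

    import time

    def fixed_interval_loop(callback, interval, cycles=3):
        # Hypothetical sketch: run callback every `interval` seconds and warn
        # when one invocation outlasts the interval, as in the log above.
        for _ in range(cycles):
            start = time.monotonic()
            callback()
            elapsed = time.monotonic() - start
            if elapsed > interval:
                print(f"run outlasted interval by {elapsed - interval:.2f} sec")
            else:
                time.sleep(interval - elapsed)

    fixed_interval_loop(lambda: time.sleep(0.08), interval=0.05)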
Nov 22 08:50:31 compute-0 openstack_network_exporter[205661]: ERROR   08:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:50:31 compute-0 openstack_network_exporter[205661]: ERROR   08:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:50:31 compute-0 openstack_network_exporter[205661]: ERROR   08:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:50:31 compute-0 openstack_network_exporter[205661]: ERROR   08:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:50:31 compute-0 openstack_network_exporter[205661]: ERROR   08:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
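These exporter errors are expected on a compute node: ovn-northd runs on the control plane, and the ovsdb-server and dpif-netdev probes look for local control sockets and a userspace datapath that do not exist here. The PID lookup appears to go by control-socket file; a hedged sketch of that check, with assumed run directories and naming:

    import glob

    def find_control_socket(daemon, run_dirs=("/var/run/openvswitch", "/var/run/ovn")):
        # Assumption: daemons create <name>.<pid>.ctl control sockets in
        # their run directory; no match yields "no control socket files found".
        for d in run_dirs:
            matches = glob.glob(f"{d}/{daemon}.*.ctl")
            if matches:
                return matches[0]
        return None

    print(find_control_socket("ovn-northd"))  # None on a compute-only node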
Nov 22 08:50:31 compute-0 ovn_controller[97783]: 2025-11-22T08:50:31Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:63:17 10.100.0.14
Nov 22 08:50:32 compute-0 nova_compute[189268]: 2025-11-22 08:50:32.381 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:32 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:32.808 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:cf:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'd6:f7:8f:a1:cd:35'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:50:32 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:32.809 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 08:50:32 compute-0 nova_compute[189268]: 2025-11-22 08:50:32.811 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.135 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 2083 Content-Type: application/json Date: Sat, 22 Nov 2025 08:50:22 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-9f1ebea9-10fd-40e9-a75b-04f6841e5e30 x-openstack-request-id: req-9f1ebea9-10fd-40e9-a75b-04f6841e5e30 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.136 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "94198e9a-a485-4010-9e92-6132c12413f2", "name": "tempest-TestServerBasicOps-server-997365885", "status": "ACTIVE", "tenant_id": "c47de2cb590748e6a379da2c77fe03df", "user_id": "056ede5a6ff04739bec29b1558f65499", "metadata": {"meta1": "data1", "meta2": "data2", "metaN": "dataN"}, "hostId": "08a8937e5e3e36ebb30170b01d66986d71603737eb9a999daf198975", "image": {"id": "ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc"}]}, "flavor": {"id": "60cc47c3-347f-4964-bb52-9bef8d0548a9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/60cc47c3-347f-4964-bb52-9bef8d0548a9"}]}, "created": "2025-11-22T08:49:13Z", "updated": "2025-11-22T08:49:24Z", "addresses": {"tempest-TestServerBasicOps-2020107474-network": [{"version": 4, "addr": "10.100.0.14", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:54:79:78"}, {"version": 4, "addr": "192.168.122.246", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:54:79:78"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/94198e9a-a485-4010-9e92-6132c12413f2"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/94198e9a-a485-4010-9e92-6132c12413f2"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestServerBasicOps-190901822", "OS-SRV-USG:launched_at": "2025-11-22T08:49:24.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-383310095"}, {"name": "tempest-securitygroup--1449063763"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.136 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/94198e9a-a485-4010-9e92-6132c12413f2 used request id req-9f1ebea9-10fd-40e9-a75b-04f6841e5e30 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.137 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '94198e9a-a485-4010-9e92-6132c12413f2', 'name': 'tempest-TestServerBasicOps-server-997365885', 'flavor': {'id': '60cc47c3-347f-4964-bb52-9bef8d0548a9', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c47de2cb590748e6a379da2c77fe03df', 'user_id': '056ede5a6ff04739bec29b1558f65499', 'hostId': '08a8937e5e3e36ebb30170b01d66986d71603737eb9a999daf198975', 'status': 'active', 'metadata': {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
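The discovery "instance data" dict is a reduction of the full Nova RESP BODY two lines up: identity, flavor, image, and a handful of OS-EXT-* attributes survive, while links and addresses are dropped. A sketch of that reduction, with key names taken from the logged output rather than ceilometer's internals:

    def to_instance_data(server):
        # Hypothetical: keep only the fields the discovery log prints.
        return {
            "id": server["id"],
            "name": server["name"],
            "status": server["status"].lower(),
            "tenant_id": server["tenant_id"],
            "user_id": server["user_id"],
            "hostId": server["hostId"],
            "metadata": server["metadata"],
            "OS-EXT-SRV-ATTR:instance_name": server["OS-EXT-SRV-ATTR:instance_name"],
            "OS-EXT-SRV-ATTR:host": server["OS-EXT-SRV-ATTR:host"],
        }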
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.137 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.137 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.138 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.138 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-22T08:50:33.138104) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.144 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/network.incoming.bytes volume: 1431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.148 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 94198e9a-a485-4010-9e92-6132c12413f2 / tapb37205f4-d4 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.148 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/network.incoming.bytes volume: 4343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
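"No delta meter predecessor" is the normal first-sample case: a delta meter subtracts the previous reading for the same instance/interface pair, and with nothing to subtract the raw counter (4343 bytes here) is reported as-is. A simplified sketch of that bookkeeping:

    _previous = {}

    def delta(key, current):
        # The first observation for an (instance, interface) key has no
        # predecessor, so the raw value passes through.
        prev = _previous.get(key)
        _previous[key] = current
        return current if prev is None else max(current - prev, 0)

    print(delta(("94198e9a", "tapb37205f4-d4"), 4343))  # 4343 on first poll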
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.149 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.149 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.149 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.149 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.149 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.149 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.149 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/network.outgoing.packets volume: 8 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.150 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.150 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.150 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.150 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.150 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.150 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.150 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.150 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.151 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.151 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.151 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.151 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.151 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.151 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.151 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.152 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.152 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.152 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.152 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.152 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.152 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.152 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.152 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.153 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-22T08:50:33.149651) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.153 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-22T08:50:33.150865) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.154 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-22T08:50:33.151906) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.154 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-22T08:50:33.152966) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.184 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/cpu volume: 36310000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.210 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/cpu volume: 36540000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.211 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
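The cpu samples above are cumulative guest CPU time in nanoseconds, not a utilization figure; a consumer derives a rate from two successive polls. For an m1.nano flavor (1 vCPU), with a hypothetical pair of readings 10 s apart:

    def cpu_util_percent(ns_prev, ns_now, seconds, vcpus=1):
        # Utilization = CPU-time delta over wall time, scaled by vCPU count.
        return (ns_now - ns_prev) / (seconds * vcpus * 1e9) * 100

    # Hypothetical successive readings; not the two instances logged above.
    print(round(cpu_util_percent(36_310_000_000, 36_540_000_000, 10.0), 1))  # 2.3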
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.211 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.211 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.211 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.211 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.211 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.212 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.212 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.212 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-22T08:50:33.211858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.213 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.213 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.213 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.213 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.213 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.213 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-22T08:50:33.213642) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.234 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.235 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.250 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.251 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.251 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.252 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.252 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.252 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.252 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.252 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.254 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-22T08:50:33.252582) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.294 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.read.bytes volume: 32032768 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.295 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.340 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/disk.device.read.bytes volume: 31566336 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.341 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.341 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.341 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.341 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.341 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.341 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.341 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.342 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.342 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.342 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.342 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.343 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.343 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.343 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.343 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.344 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.344 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-22T08:50:33.341945) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.344 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/network.outgoing.bytes volume: 900 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.344 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.344 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.345 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.345 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.345 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.345 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-22T08:50:33.344090) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.345 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.345 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.345 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.345 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-997365885>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-997365885>]
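
The ERROR above (and its twin for network.outgoing.bytes.rate later in this interval) is ceilometer's permanent-blacklist mechanism: LibvirtInspector only exposes cumulative counters, so *.rate meters can never be computed from it, and the pollster raises PollsterPermanentError so the manager stops polling those resources on this source for good. A minimal sketch of the pattern; PollsterPermanentError is the real exception from ceilometer.polling.plugin_base, but the surrounding loop and every other name (RatePollster, poll, blacklist) are purely illustrative:

    # Sketch of the permanent-blacklist pattern behind the ERROR above.
    class PollsterPermanentError(Exception):
        # Carries the resources that must never be polled again.
        def __init__(self, fail_res_list):
            self.fail_res_list = fail_res_list

    class RatePollster:
        def get_samples(self, resources):
            # The libvirt inspector has no rate data, only counters.
            raise PollsterPermanentError(resources)

    def poll(pollster, resources, blacklist):
        todo = [r for r in resources if r not in blacklist]
        try:
            return list(pollster.get_samples(todo))
        except PollsterPermanentError as err:
            blacklist.extend(err.fail_res_list)  # logged as "Prevent ..."
            return []

    blacklist = []
    poll(RatePollster(), ["tempest-server"], blacklist)
    assert blacklist == ["tempest-server"]  # skipped on every later cycle
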
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.346 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.346 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.346 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.346 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-22T08:50:33.345660) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.346 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.346 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.read.latency volume: 2638490755 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.346 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.read.latency volume: 240205122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.347 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/disk.device.read.latency volume: 1294697797 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.347 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/disk.device.read.latency volume: 79170885 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.347 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.348 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.348 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.348 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.348 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.348 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.348 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/network.incoming.bytes.delta volume: 1341 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.348 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.349 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.349 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.349 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.349 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.349 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.349 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.349 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.write.requests volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.349 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-22T08:50:33.346617) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.350 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.350 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/disk.device.write.requests volume: 313 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.350 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.350 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.351 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.351 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.351 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.351 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-22T08:50:33.348392) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.351 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.351 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-22T08:50:33.349800) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.351 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.351 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.read.requests volume: 1211 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.352 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.352 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/disk.device.read.requests volume: 1155 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.352 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.352 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.352 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.353 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.353 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.353 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.353 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-22T08:50:33.351718) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.353 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.353 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.353 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-22T08:50:33.353419) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.353 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.354 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.354 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.354 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.354 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.354 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.354 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.354 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.354 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.355 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.write.bytes volume: 147456 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.355 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.355 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/disk.device.write.bytes volume: 72957952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.355 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.355 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.356 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.356 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.356 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.356 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.356 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.356 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-22T08:50:33.354906) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.356 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.356 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.357 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
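
Both instances report power.state volume 1, which the pollster passes through from the libvirt domain state; assuming the standard virDomainState numbering, 1 means "running". A quick reference table:

    # libvirt virDomainState values surfaced by the power.state meter;
    # both instances above report 1 (running).
    LIBVIRT_POWER_STATE = {
        0: "nostate",
        1: "running",
        2: "blocked",
        3: "paused",
        4: "shutdown",     # in the process of shutting down
        5: "shutoff",
        6: "crashed",
        7: "pmsuspended",
    }
    print(LIBVIRT_POWER_STATE[1])  # -> running
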
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.357 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.357 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.357 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.357 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.357 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-22T08:50:33.356626) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.357 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.357 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.write.latency volume: 822188105 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.358 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.358 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/disk.device.write.latency volume: 6013349591 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.358 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.358 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.358 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.359 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.359 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.359 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.359 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-22T08:50:33.357757) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.359 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.359 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.359 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.359 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.360 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.360 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.360 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.360 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.360 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.360 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.360 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.361 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.361 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.361 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.361 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-22T08:50:33.359308) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.361 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-22T08:50:33.360175) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.361 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.361 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.362 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.362 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.362 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.362 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-22T08:50:33.361359) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.362 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.362 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.362 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.363 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.363 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.363 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.363 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.363 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.363 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/network.outgoing.bytes.delta volume: 900 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.363 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.363 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.364 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.364 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.364 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.364 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.364 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-22T08:50:33.362306) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.364 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.364 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-22T08:50:33.363368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.364 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.364 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-997365885>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-997365885>]
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.364 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-22T08:50:33.364491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.365 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.365 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.365 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.365 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.365 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.365 15 DEBUG ceilometer.compute.pollsters [-] 4414e066-bc1a-4a63-b3a0-5e88f0553032/memory.usage volume: 42.83203125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.365 15 DEBUG ceilometer.compute.pollsters [-] 94198e9a-a485-4010-9e92-6132c12413f2/memory.usage volume: 42.8359375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.365 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.366 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-22T08:50:33.365261) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.366 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.366 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.366 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.366 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.366 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.366 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.366 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.366 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:50:33 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:50:33.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
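
That burst of "Finished processing pollster [...]" lines closes one polling interval. The per-pollster sequence traced above is always the same: run discovery (local_instances), check whether the pollster needs coordination (none of these are in a hash ring), record a heartbeat, convert inspector stats into samples, log completion. A compressed sketch of that flow under those assumptions; the class and function names here are stand-ins, not ceilometer's real internals:

    # One polling interval, compressed from the trace above.
    def run_polling_task(pollsters, discover, inspector, heartbeat):
        for pollster in pollsters:
            resources = discover("local_instances")   # discovery step
            # coordination check skipped: no hash ring configured
            heartbeat(pollster.name)                  # heartbeat update
            for res in resources:
                for stat in inspector.stats(pollster.name, res):
                    yield (res, pollster.name, stat)  # _stats_to_sample
            # the manager then logs "Finished polling pollster <name>"

    class FakeInspector:
        def stats(self, meter, res):
            return [42]  # one stat per resource, for illustration

    pollster = type("P", (), {"name": "memory.usage"})()
    samples = list(run_polling_task(
        [pollster], lambda m: ["4414e066"], FakeInspector(), lambda n: None))
    # samples == [("4414e066", "memory.usage", 42)]
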
Nov 22 08:50:33 compute-0 nova_compute[189268]: 2025-11-22 08:50:33.839 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:34 compute-0 nova_compute[189268]: 2025-11-22 08:50:34.757 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:35 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:35.491 106749 DEBUG eventlet.wsgi.server [-] (106749) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Nov 22 08:50:35 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:35.492 106749 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Nov 22 08:50:35 compute-0 ovn_metadata_agent[106637]: Accept: */*
Nov 22 08:50:35 compute-0 ovn_metadata_agent[106637]: Connection: close
Nov 22 08:50:35 compute-0 ovn_metadata_agent[106637]: Content-Type: text/plain
Nov 22 08:50:35 compute-0 ovn_metadata_agent[106637]: Host: 169.254.169.254
Nov 22 08:50:35 compute-0 ovn_metadata_agent[106637]: User-Agent: curl/7.84.0
Nov 22 08:50:35 compute-0 ovn_metadata_agent[106637]: X-Forwarded-For: 10.100.0.14
Nov 22 08:50:35 compute-0 ovn_metadata_agent[106637]: X-Ovn-Network-Id: aa8fe5d7-0d24-412a-ac01-d2a96241587e __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Nov 22 08:50:37 compute-0 nova_compute[189268]: 2025-11-22 08:50:37.384 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:38 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:38.024 106749 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Nov 22 08:50:38 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:38.026 106749 INFO eventlet.wsgi.server [-] 10.100.0.14,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 2.5345371
Nov 22 08:50:38 compute-0 haproxy-metadata-proxy-aa8fe5d7-0d24-412a-ac01-d2a96241587e[251933]: 10.100.0.14:55538 [22/Nov/2025:08:50:35.490] listener listener/metadata 0/0/0/2536/2536 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Nov 22 08:50:38 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:38.111 106749 DEBUG eventlet.wsgi.server [-] (106749) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Nov 22 08:50:38 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:38.112 106749 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Nov 22 08:50:38 compute-0 ovn_metadata_agent[106637]: Accept: */*
Nov 22 08:50:38 compute-0 ovn_metadata_agent[106637]: Connection: close
Nov 22 08:50:38 compute-0 ovn_metadata_agent[106637]: Content-Length: 100
Nov 22 08:50:38 compute-0 ovn_metadata_agent[106637]: Content-Type: application/x-www-form-urlencoded
Nov 22 08:50:38 compute-0 ovn_metadata_agent[106637]: Host: 169.254.169.254
Nov 22 08:50:38 compute-0 ovn_metadata_agent[106637]: User-Agent: curl/7.84.0
Nov 22 08:50:38 compute-0 ovn_metadata_agent[106637]: X-Forwarded-For: 10.100.0.14
Nov 22 08:50:38 compute-0 ovn_metadata_agent[106637]: X-Ovn-Network-Id: aa8fe5d7-0d24-412a-ac01-d2a96241587e
Nov 22 08:50:38 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:50:38 compute-0 ovn_metadata_agent[106637]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Nov 22 08:50:38 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:38.605 106749 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Nov 22 08:50:38 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:38.606 106749 INFO eventlet.wsgi.server [-] 10.100.0.14,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.4942238
Nov 22 08:50:38 compute-0 haproxy-metadata-proxy-aa8fe5d7-0d24-412a-ac01-d2a96241587e[251933]: 10.100.0.14:55892 [22/Nov/2025:08:50:38.109] listener listener/metadata 0/0/0/496/496 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
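Both proxied requests above originate inside the guest (note User-Agent: curl). A minimal guest-side sketch of the same two calls, assuming it runs in the instance where 169.254.169.254 is routable; only the endpoint and the two paths are taken from the log:

    # Minimal sketch of the guest-side metadata traffic proxied above.
    import urllib.request

    METADATA = "http://169.254.169.254"

    # GET /latest/meta-data/public-ipv4 (EC2-compatible metadata path)
    with urllib.request.urlopen(f"{METADATA}/latest/meta-data/public-ipv4") as resp:
        print(resp.read().decode())

    # POST /openstack/2013-10-17/password: cirros publishes the instance
    # password here; a 100-byte body matches the Content-Length: 100 above.
    req = urllib.request.Request(
        f"{METADATA}/openstack/2013-10-17/password",
        data=b"test" * 25,
        method="POST",
    )
    urllib.request.urlopen(req).close()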
Nov 22 08:50:38 compute-0 nova_compute[189268]: 2025-11-22 08:50:38.641 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:38 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:38.810 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:50:38 compute-0 nova_compute[189268]: 2025-11-22 08:50:38.842 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:39 compute-0 podman[252702]: 2025-11-22 08:50:39.126327052 +0000 UTC m=+0.080066784 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 08:50:39 compute-0 podman[252703]: 2025-11-22 08:50:39.139696322 +0000 UTC m=+0.087841503 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:50:39 compute-0 podman[252704]: 2025-11-22 08:50:39.143156334 +0000 UTC m=+0.082781776 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 22 08:50:39 compute-0 nova_compute[189268]: 2025-11-22 08:50:39.536 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.160 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.599 189273 DEBUG oslo_concurrency.lockutils [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Acquiring lock "94198e9a-a485-4010-9e92-6132c12413f2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.599 189273 DEBUG oslo_concurrency.lockutils [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lock "94198e9a-a485-4010-9e92-6132c12413f2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.600 189273 DEBUG oslo_concurrency.lockutils [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Acquiring lock "94198e9a-a485-4010-9e92-6132c12413f2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.600 189273 DEBUG oslo_concurrency.lockutils [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lock "94198e9a-a485-4010-9e92-6132c12413f2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.600 189273 DEBUG oslo_concurrency.lockutils [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lock "94198e9a-a485-4010-9e92-6132c12413f2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.602 189273 INFO nova.compute.manager [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Terminating instance
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.603 189273 DEBUG nova.compute.manager [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 08:50:41 compute-0 ovn_controller[97783]: 2025-11-22T08:50:41Z|00127|binding|INFO|Releasing lport 37fb22bb-e01c-451f-a2d2-26ee384f1620 from this chassis (sb_readonly=0)
Nov 22 08:50:41 compute-0 ovn_controller[97783]: 2025-11-22T08:50:41Z|00128|binding|INFO|Releasing lport 90405c2f-de13-48c0-b5df-199144f1c020 from this chassis (sb_readonly=0)
Nov 22 08:50:41 compute-0 kernel: tapb37205f4-d4 (unregistering): left promiscuous mode
Nov 22 08:50:41 compute-0 NetworkManager[56326]: <info>  [1763801441.6476] device (tapb37205f4-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 08:50:41 compute-0 ovn_controller[97783]: 2025-11-22T08:50:41Z|00129|binding|INFO|Releasing lport b37205f4-d490-4b94-8deb-1db878ab597a from this chassis (sb_readonly=0)
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.690 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:41 compute-0 ovn_controller[97783]: 2025-11-22T08:50:41Z|00130|binding|INFO|Setting lport b37205f4-d490-4b94-8deb-1db878ab597a down in Southbound
Nov 22 08:50:41 compute-0 ovn_controller[97783]: 2025-11-22T08:50:41Z|00131|binding|INFO|Removing iface tapb37205f4-d4 ovn-installed in OVS
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.694 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.757 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.767 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:41 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 22 08:50:41 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 44.034s CPU time.
Nov 22 08:50:41 compute-0 systemd-machined[155703]: Machine qemu-11-instance-0000000b terminated.
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.828 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.834 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.873 189273 INFO nova.virt.libvirt.driver [-] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Instance destroyed successfully.
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.874 189273 DEBUG nova.objects.instance [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lazy-loading 'resources' on Instance uuid 94198e9a-a485-4010-9e92-6132c12413f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.892 189273 DEBUG nova.virt.libvirt.vif [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T08:49:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-997365885',display_name='tempest-TestServerBasicOps-server-997365885',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-997365885',id=11,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLcYvGNYl23mihoJLTfvE0IiYo3x2gxuZswcDN3C9+US21VTdIP/lsfNQ9GDLAttRATHAuOf6pUBP+qoE3j4vwOTOhZLaw5In/EmWAhgL9G+Ls4Z8R14o3Gu6x4a5/U0tA==',key_name='tempest-TestServerBasicOps-190901822',keypairs=<?>,launch_index=0,launched_at=2025-11-22T08:49:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c47de2cb590748e6a379da2c77fe03df',ramdisk_id='',reservation_id='r-0twfn3s0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-685598289',owner_user_name='tempest-TestServerBasicOps-685598289-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T08:50:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='056ede5a6ff04739bec29b1558f65499',uuid=94198e9a-a485-4010-9e92-6132c12413f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b37205f4-d490-4b94-8deb-1db878ab597a", "address": "fa:16:3e:54:79:78", "network": {"id": "aa8fe5d7-0d24-412a-ac01-d2a96241587e", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2020107474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c47de2cb590748e6a379da2c77fe03df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb37205f4-d4", "ovs_interfaceid": "b37205f4-d490-4b94-8deb-1db878ab597a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.893 189273 DEBUG nova.network.os_vif_util [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Converting VIF {"id": "b37205f4-d490-4b94-8deb-1db878ab597a", "address": "fa:16:3e:54:79:78", "network": {"id": "aa8fe5d7-0d24-412a-ac01-d2a96241587e", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2020107474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c47de2cb590748e6a379da2c77fe03df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb37205f4-d4", "ovs_interfaceid": "b37205f4-d490-4b94-8deb-1db878ab597a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.893 189273 DEBUG nova.network.os_vif_util [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:79:78,bridge_name='br-int',has_traffic_filtering=True,id=b37205f4-d490-4b94-8deb-1db878ab597a,network=Network(aa8fe5d7-0d24-412a-ac01-d2a96241587e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb37205f4-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.894 189273 DEBUG os_vif [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:79:78,bridge_name='br-int',has_traffic_filtering=True,id=b37205f4-d490-4b94-8deb-1db878ab597a,network=Network(aa8fe5d7-0d24-412a-ac01-d2a96241587e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb37205f4-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.896 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.896 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb37205f4-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.898 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.899 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.905 189273 INFO os_vif [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:79:78,bridge_name='br-int',has_traffic_filtering=True,id=b37205f4-d490-4b94-8deb-1db878ab597a,network=Network(aa8fe5d7-0d24-412a-ac01-d2a96241587e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb37205f4-d4')
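The DelPortCommand transaction above is what os-vif issues through ovsdbapp to detach the tap from br-int. A standalone sketch of the same call, assuming local access to the OVS database; the socket path and timeout are illustrative, while the port and bridge names come from the log:

    # Sketch of the DelPortCommand transaction logged above, driven through
    # ovsdbapp directly rather than via os-vif. Socket path and timeout are
    # assumptions; port and bridge names are copied from the log.
    from ovs.db import idl
    from ovsdbapp.backend.ovs_idl import connection, idlutils
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = "unix:/run/openvswitch/db.sock"

    helper = idlutils.get_schema_helper(OVSDB, "Open_vSwitch")
    helper.register_all()
    api = impl_idl.OvsdbIdl(connection.Connection(idl.Idl(OVSDB, helper), timeout=10))

    # Same operation as DelPortCommand(port=tapb37205f4-d4, bridge=br-int,
    # if_exists=True) in the transaction above.
    api.del_port("tapb37205f4-d4", bridge="br-int", if_exists=True).execute(
        check_error=True)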
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.906 189273 INFO nova.virt.libvirt.driver [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Deleting instance files /var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2_del
Nov 22 08:50:41 compute-0 nova_compute[189268]: 2025-11-22 08:50:41.907 189273 INFO nova.virt.libvirt.driver [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Deletion of /var/lib/nova/instances/94198e9a-a485-4010-9e92-6132c12413f2_del complete
Nov 22 08:50:42 compute-0 nova_compute[189268]: 2025-11-22 08:50:42.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:50:42 compute-0 nova_compute[189268]: 2025-11-22 08:50:42.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:50:42 compute-0 nova_compute[189268]: 2025-11-22 08:50:42.387 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:42 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:42.534 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:79:78 10.100.0.14'], port_security=['fa:16:3e:54:79:78 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '94198e9a-a485-4010-9e92-6132c12413f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa8fe5d7-0d24-412a-ac01-d2a96241587e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c47de2cb590748e6a379da2c77fe03df', 'neutron:revision_number': '4', 'neutron:security_group_ids': '385e5112-f14c-413a-95f3-479f92434a93 a40a0964-d73d-40d5-afbf-df9a4cc985f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a0953bc-35ff-4d2d-896b-e32829dcd57c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=b37205f4-d490-4b94-8deb-1db878ab597a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:50:42 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:42.535 106642 INFO neutron.agent.ovn.metadata.agent [-] Port b37205f4-d490-4b94-8deb-1db878ab597a in datapath aa8fe5d7-0d24-412a-ac01-d2a96241587e unbound from our chassis
Nov 22 08:50:42 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:42.537 106642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa8fe5d7-0d24-412a-ac01-d2a96241587e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 08:50:42 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:42.539 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd8a384-8918-470e-ae21-63b73a955483]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:50:42 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:42.539 106642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e namespace which is not needed anymore
Nov 22 08:50:42 compute-0 nova_compute[189268]: 2025-11-22 08:50:42.562 189273 INFO nova.compute.manager [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Took 0.96 seconds to destroy the instance on the hypervisor.
Nov 22 08:50:42 compute-0 nova_compute[189268]: 2025-11-22 08:50:42.563 189273 DEBUG oslo.service.loopingcall [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 08:50:42 compute-0 nova_compute[189268]: 2025-11-22 08:50:42.563 189273 DEBUG nova.compute.manager [-] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 08:50:42 compute-0 nova_compute[189268]: 2025-11-22 08:50:42.563 189273 DEBUG nova.network.neutron [-] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 08:50:42 compute-0 neutron-haproxy-ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e[251927]: [NOTICE]   (251931) : haproxy version is 2.8.14-c23fe91
Nov 22 08:50:42 compute-0 neutron-haproxy-ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e[251927]: [NOTICE]   (251931) : path to executable is /usr/sbin/haproxy
Nov 22 08:50:42 compute-0 neutron-haproxy-ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e[251927]: [WARNING]  (251931) : Exiting Master process...
Nov 22 08:50:42 compute-0 neutron-haproxy-ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e[251927]: [ALERT]    (251931) : Current worker (251933) exited with code 143 (Terminated)
Nov 22 08:50:42 compute-0 neutron-haproxy-ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e[251927]: [WARNING]  (251931) : All workers exited. Exiting... (0)
Nov 22 08:50:42 compute-0 systemd[1]: libpod-3a2373900e183b39499d0f57566a896d3fefa7c5be0d8180a27d690f11dd2e90.scope: Deactivated successfully.
Nov 22 08:50:42 compute-0 podman[252800]: 2025-11-22 08:50:42.995702972 +0000 UTC m=+0.309816562 container died 3a2373900e183b39499d0f57566a896d3fefa7c5be0d8180a27d690f11dd2e90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 08:50:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a2373900e183b39499d0f57566a896d3fefa7c5be0d8180a27d690f11dd2e90-userdata-shm.mount: Deactivated successfully.
Nov 22 08:50:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-729692d432811abd74cff7983b0b60f975182c70b888bbf463952acb70267d89-merged.mount: Deactivated successfully.
Nov 22 08:50:43 compute-0 podman[252800]: 2025-11-22 08:50:43.769137476 +0000 UTC m=+1.083251106 container cleanup 3a2373900e183b39499d0f57566a896d3fefa7c5be0d8180a27d690f11dd2e90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 08:50:43 compute-0 systemd[1]: libpod-conmon-3a2373900e183b39499d0f57566a896d3fefa7c5be0d8180a27d690f11dd2e90.scope: Deactivated successfully.
Nov 22 08:50:44 compute-0 nova_compute[189268]: 2025-11-22 08:50:44.073 189273 DEBUG nova.compute.manager [req-082bd13b-24b9-42c8-aadf-01ddae1c7547 req-b87659f2-5b1f-4079-8856-44a80259d098 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Received event network-vif-plugged-b37205f4-d490-4b94-8deb-1db878ab597a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:50:44 compute-0 nova_compute[189268]: 2025-11-22 08:50:44.073 189273 DEBUG oslo_concurrency.lockutils [req-082bd13b-24b9-42c8-aadf-01ddae1c7547 req-b87659f2-5b1f-4079-8856-44a80259d098 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "94198e9a-a485-4010-9e92-6132c12413f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:50:44 compute-0 nova_compute[189268]: 2025-11-22 08:50:44.074 189273 DEBUG oslo_concurrency.lockutils [req-082bd13b-24b9-42c8-aadf-01ddae1c7547 req-b87659f2-5b1f-4079-8856-44a80259d098 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "94198e9a-a485-4010-9e92-6132c12413f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:50:44 compute-0 nova_compute[189268]: 2025-11-22 08:50:44.074 189273 DEBUG oslo_concurrency.lockutils [req-082bd13b-24b9-42c8-aadf-01ddae1c7547 req-b87659f2-5b1f-4079-8856-44a80259d098 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "94198e9a-a485-4010-9e92-6132c12413f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:50:44 compute-0 nova_compute[189268]: 2025-11-22 08:50:44.074 189273 DEBUG nova.compute.manager [req-082bd13b-24b9-42c8-aadf-01ddae1c7547 req-b87659f2-5b1f-4079-8856-44a80259d098 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] No waiting events found dispatching network-vif-plugged-b37205f4-d490-4b94-8deb-1db878ab597a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:50:44 compute-0 nova_compute[189268]: 2025-11-22 08:50:44.075 189273 WARNING nova.compute.manager [req-082bd13b-24b9-42c8-aadf-01ddae1c7547 req-b87659f2-5b1f-4079-8856-44a80259d098 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Received unexpected event network-vif-plugged-b37205f4-d490-4b94-8deb-1db878ab597a for instance with vm_state active and task_state deleting.
Nov 22 08:50:44 compute-0 nova_compute[189268]: 2025-11-22 08:50:44.264 189273 DEBUG nova.network.neutron [-] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:50:44 compute-0 podman[252829]: 2025-11-22 08:50:44.371803098 +0000 UTC m=+0.570173180 container remove 3a2373900e183b39499d0f57566a896d3fefa7c5be0d8180a27d690f11dd2e90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 08:50:44 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:44.380 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c0bc64-dfb5-45ec-b6b9-f9e5bdc61dfc]: (4, ('Sat Nov 22 08:50:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e (3a2373900e183b39499d0f57566a896d3fefa7c5be0d8180a27d690f11dd2e90)\n3a2373900e183b39499d0f57566a896d3fefa7c5be0d8180a27d690f11dd2e90\nSat Nov 22 08:50:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e (3a2373900e183b39499d0f57566a896d3fefa7c5be0d8180a27d690f11dd2e90)\n3a2373900e183b39499d0f57566a896d3fefa7c5be0d8180a27d690f11dd2e90\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:50:44 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:44.383 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[d5825de7-e284-46ca-bcf5-d540e68ca5fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:50:44 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:44.385 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa8fe5d7-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:50:44 compute-0 nova_compute[189268]: 2025-11-22 08:50:44.389 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:44 compute-0 kernel: tapaa8fe5d7-00: left promiscuous mode
Nov 22 08:50:44 compute-0 nova_compute[189268]: 2025-11-22 08:50:44.420 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:44 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:44.425 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[8055debc-6217-482a-ada7-631a9cc4f5b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:50:44 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:44.440 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[86855573-d5bc-4017-8d17-88488ad2d0d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:50:44 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:44.443 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[742f582b-ac41-434d-b1b4-46e3ff60a63b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:50:44 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:44.463 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[8e16f2e6-8efd-4534-ae66-f1ac6f8c2482]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647616, 'reachable_time': 41901, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252843, 'error': None, 'target': 'ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:50:44 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:44.466 106754 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 08:50:44 compute-0 systemd[1]: run-netns-ovnmeta\x2daa8fe5d7\x2d0d24\x2d412a\x2dac01\x2dd2a96241587e.mount: Deactivated successfully.
Nov 22 08:50:44 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:44.467 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[2627ca8f-41a4-400e-9c2e-3c827fbc1e10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
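The remove_netns call above is neutron's privsep-wrapped namespace delete. The same teardown done directly with pyroute2 (the library behind neutron's privileged ip_lib), as a sketch that assumes root privileges; the namespace name is copied from the log:

    # Sketch of the namespace teardown logged above, using pyroute2.
    # Requires root; the namespace name is copied from the log.
    from pyroute2 import netns

    NS = "ovnmeta-aa8fe5d7-0d24-412a-ac01-d2a96241587e"
    if NS in netns.listnetns():
        netns.remove(NS)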
Nov 22 08:50:44 compute-0 nova_compute[189268]: 2025-11-22 08:50:44.497 189273 INFO nova.compute.manager [-] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Took 1.93 seconds to deallocate network for instance.
Nov 22 08:50:44 compute-0 nova_compute[189268]: 2025-11-22 08:50:44.634 189273 DEBUG oslo_concurrency.lockutils [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:50:44 compute-0 nova_compute[189268]: 2025-11-22 08:50:44.635 189273 DEBUG oslo_concurrency.lockutils [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:50:44 compute-0 nova_compute[189268]: 2025-11-22 08:50:44.731 189273 DEBUG nova.compute.provider_tree [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:50:44 compute-0 nova_compute[189268]: 2025-11-22 08:50:44.752 189273 DEBUG nova.scheduler.client.report [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
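Placement sizes this node from the inventory dict above: per resource class, effective capacity is (total - reserved) * allocation_ratio. Worked out with the logged values:

    # Effective capacity implied by the inventory data logged above.
    # Values are copied from the log line; the formula follows placement's
    # capacity semantics: (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} allocatable")
    # VCPU: 32  MEMORY_MB: 7167  DISK_GB: 70.2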
Nov 22 08:50:44 compute-0 nova_compute[189268]: 2025-11-22 08:50:44.799 189273 DEBUG oslo_concurrency.lockutils [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:50:44 compute-0 nova_compute[189268]: 2025-11-22 08:50:44.881 189273 INFO nova.scheduler.client.report [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Deleted allocations for instance 94198e9a-a485-4010-9e92-6132c12413f2
Nov 22 08:50:45 compute-0 nova_compute[189268]: 2025-11-22 08:50:45.015 189273 DEBUG oslo_concurrency.lockutils [None req-389039e3-8b7d-4205-8e49-dd784b9985f9 056ede5a6ff04739bec29b1558f65499 c47de2cb590748e6a379da2c77fe03df - - default default] Lock "94198e9a-a485-4010-9e92-6132c12413f2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:50:45 compute-0 nova_compute[189268]: 2025-11-22 08:50:45.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:50:46 compute-0 nova_compute[189268]: 2025-11-22 08:50:46.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:50:46 compute-0 nova_compute[189268]: 2025-11-22 08:50:46.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:50:46 compute-0 nova_compute[189268]: 2025-11-22 08:50:46.170 189273 DEBUG nova.compute.manager [req-3acde745-dd69-4f57-9e3a-883722db47f6 req-9916f9b5-f284-4b26-9ab9-27cbec78289a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Received event network-vif-deleted-b37205f4-d490-4b94-8deb-1db878ab597a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:50:46 compute-0 nova_compute[189268]: 2025-11-22 08:50:46.901 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:47 compute-0 nova_compute[189268]: 2025-11-22 08:50:47.390 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:48 compute-0 ovn_controller[97783]: 2025-11-22T08:50:48Z|00132|binding|INFO|Releasing lport 37fb22bb-e01c-451f-a2d2-26ee384f1620 from this chassis (sb_readonly=0)
Nov 22 08:50:48 compute-0 nova_compute[189268]: 2025-11-22 08:50:48.985 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:49 compute-0 nova_compute[189268]: 2025-11-22 08:50:49.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:50:49 compute-0 podman[252845]: 2025-11-22 08:50:49.122175442 +0000 UTC m=+0.074837512 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS)
Nov 22 08:50:49 compute-0 podman[252844]: 2025-11-22 08:50:49.14440523 +0000 UTC m=+0.100903414 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 22 08:50:51 compute-0 nova_compute[189268]: 2025-11-22 08:50:51.904 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:52 compute-0 nova_compute[189268]: 2025-11-22 08:50:52.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:50:52 compute-0 podman[252883]: 2025-11-22 08:50:52.122708922 +0000 UTC m=+0.078089291 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=kepler, name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, release=1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container)
Nov 22 08:50:52 compute-0 podman[252884]: 2025-11-22 08:50:52.168622227 +0000 UTC m=+0.119797683 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:50:52 compute-0 nova_compute[189268]: 2025-11-22 08:50:52.393 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:53 compute-0 nova_compute[189268]: 2025-11-22 08:50:53.112 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:50:53 compute-0 nova_compute[189268]: 2025-11-22 08:50:53.113 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 08:50:53 compute-0 nova_compute[189268]: 2025-11-22 08:50:53.126 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 08:50:53 compute-0 nova_compute[189268]: 2025-11-22 08:50:53.805 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquiring lock "38817707-1f5a-4596-bfd2-b48048331de7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:50:53 compute-0 nova_compute[189268]: 2025-11-22 08:50:53.806 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "38817707-1f5a-4596-bfd2-b48048331de7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:50:53 compute-0 nova_compute[189268]: 2025-11-22 08:50:53.837 189273 DEBUG nova.compute.manager [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 08:50:53 compute-0 nova_compute[189268]: 2025-11-22 08:50:53.980 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:50:53 compute-0 nova_compute[189268]: 2025-11-22 08:50:53.982 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:50:53 compute-0 nova_compute[189268]: 2025-11-22 08:50:53.990 189273 DEBUG nova.virt.hardware [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 08:50:53 compute-0 nova_compute[189268]: 2025-11-22 08:50:53.991 189273 INFO nova.compute.claims [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Claim successful on node compute-0.ctlplane.example.com
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.116 189273 DEBUG nova.scheduler.client.report [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Refreshing inventories for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.203 189273 DEBUG nova.scheduler.client.report [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Updating ProviderTree inventory for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.204 189273 DEBUG nova.compute.provider_tree [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Updating inventory in ProviderTree for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.223 189273 DEBUG nova.scheduler.client.report [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Refreshing aggregate associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.248 189273 DEBUG nova.scheduler.client.report [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Refreshing trait associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, traits: COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.327 189273 DEBUG nova.compute.provider_tree [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.340 189273 DEBUG nova.scheduler.client.report [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.371 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.389s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.372 189273 DEBUG nova.compute.manager [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.415 189273 DEBUG nova.compute.manager [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.416 189273 DEBUG nova.network.neutron [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.444 189273 INFO nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.459 189273 DEBUG nova.compute.manager [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.566 189273 DEBUG nova.compute.manager [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.567 189273 DEBUG nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.568 189273 INFO nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Creating image(s)
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.568 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquiring lock "/var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.569 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "/var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.569 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "/var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.583 189273 DEBUG oslo_concurrency.processutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.654 189273 DEBUG oslo_concurrency.processutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.655 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquiring lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.656 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.669 189273 DEBUG oslo_concurrency.processutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.726 189273 DEBUG oslo_concurrency.processutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.728 189273 DEBUG oslo_concurrency.processutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad,backing_fmt=raw /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.796 189273 DEBUG oslo_concurrency.processutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad,backing_fmt=raw /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk 1073741824" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.797 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.798 189273 DEBUG oslo_concurrency.processutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.860 189273 DEBUG oslo_concurrency.processutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.862 189273 DEBUG nova.virt.disk.api [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Checking if we can resize image /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.862 189273 DEBUG oslo_concurrency.processutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.924 189273 DEBUG oslo_concurrency.processutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.925 189273 DEBUG nova.virt.disk.api [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Cannot resize image /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.925 189273 DEBUG nova.objects.instance [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lazy-loading 'migration_context' on Instance uuid 38817707-1f5a-4596-bfd2-b48048331de7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.960 189273 DEBUG nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.961 189273 DEBUG nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Ensure instance console log exists: /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.962 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.962 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:50:54 compute-0 nova_compute[189268]: 2025-11-22 08:50:54.963 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:50:55 compute-0 nova_compute[189268]: 2025-11-22 08:50:55.269 189273 DEBUG nova.policy [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '584cc3e3a5224a2e9a08273882841998', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b97da7a1b46046e59c36f5af412de432', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 08:50:56 compute-0 nova_compute[189268]: 2025-11-22 08:50:56.871 189273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763801441.8691401, 94198e9a-a485-4010-9e92-6132c12413f2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:50:56 compute-0 nova_compute[189268]: 2025-11-22 08:50:56.871 189273 INFO nova.compute.manager [-] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] VM Stopped (Lifecycle Event)
Nov 22 08:50:56 compute-0 nova_compute[189268]: 2025-11-22 08:50:56.891 189273 DEBUG nova.compute.manager [None req-e1a159ad-14e0-4454-87eb-c20897dd92b0 - - - - - -] [instance: 94198e9a-a485-4010-9e92-6132c12413f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:50:56 compute-0 nova_compute[189268]: 2025-11-22 08:50:56.911 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:57 compute-0 nova_compute[189268]: 2025-11-22 08:50:57.396 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:57 compute-0 nova_compute[189268]: 2025-11-22 08:50:57.587 189273 DEBUG oslo_concurrency.lockutils [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Acquiring lock "4414e066-bc1a-4a63-b3a0-5e88f0553032" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:50:57 compute-0 nova_compute[189268]: 2025-11-22 08:50:57.588 189273 DEBUG oslo_concurrency.lockutils [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:50:57 compute-0 nova_compute[189268]: 2025-11-22 08:50:57.588 189273 DEBUG oslo_concurrency.lockutils [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Acquiring lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:50:57 compute-0 nova_compute[189268]: 2025-11-22 08:50:57.589 189273 DEBUG oslo_concurrency.lockutils [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:50:57 compute-0 nova_compute[189268]: 2025-11-22 08:50:57.589 189273 DEBUG oslo_concurrency.lockutils [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:50:57 compute-0 nova_compute[189268]: 2025-11-22 08:50:57.590 189273 INFO nova.compute.manager [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Terminating instance
Nov 22 08:50:57 compute-0 nova_compute[189268]: 2025-11-22 08:50:57.591 189273 DEBUG nova.compute.manager [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 08:50:57 compute-0 nova_compute[189268]: 2025-11-22 08:50:57.694 189273 DEBUG nova.network.neutron [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Successfully created port: 1a2be7e7-4a90-44c8-bdf7-adac66f1e84d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 08:50:57 compute-0 kernel: tap3f5ad619-9c (unregistering): left promiscuous mode
Nov 22 08:50:57 compute-0 NetworkManager[56326]: <info>  [1763801457.7725] device (tap3f5ad619-9c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 08:50:57 compute-0 nova_compute[189268]: 2025-11-22 08:50:57.787 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:57 compute-0 ovn_controller[97783]: 2025-11-22T08:50:57Z|00133|binding|INFO|Releasing lport 3f5ad619-9cef-49b4-b0fd-8243d3506e32 from this chassis (sb_readonly=0)
Nov 22 08:50:57 compute-0 ovn_controller[97783]: 2025-11-22T08:50:57Z|00134|binding|INFO|Setting lport 3f5ad619-9cef-49b4-b0fd-8243d3506e32 down in Southbound
Nov 22 08:50:57 compute-0 ovn_controller[97783]: 2025-11-22T08:50:57Z|00135|binding|INFO|Removing iface tap3f5ad619-9c ovn-installed in OVS
Nov 22 08:50:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:57.807 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:63:17 10.100.0.14'], port_security=['fa:16:3e:7a:63:17 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4414e066-bc1a-4a63-b3a0-5e88f0553032', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3485ad45-c98a-4c02-b9a2-34cc945b16d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8de05c82cd5c4f7bbe156c45495011c2', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4307701f-74fd-4973-8f0e-4204e8ea3fdd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.212', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5195068-1343-424b-8d74-4082a6f38e4c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=3f5ad619-9cef-49b4-b0fd-8243d3506e32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:50:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:57.808 106642 INFO neutron.agent.ovn.metadata.agent [-] Port 3f5ad619-9cef-49b4-b0fd-8243d3506e32 in datapath 3485ad45-c98a-4c02-b9a2-34cc945b16d2 unbound from our chassis
Nov 22 08:50:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:57.810 106642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3485ad45-c98a-4c02-b9a2-34cc945b16d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 08:50:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:57.812 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[c255fdfc-870f-42ef-9e23-f5233bf6dc6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:50:57 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:57.813 106642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2 namespace which is not needed anymore
Nov 22 08:50:57 compute-0 nova_compute[189268]: 2025-11-22 08:50:57.815 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:57 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 22 08:50:57 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000008.scope: Consumed 45.343s CPU time.
Nov 22 08:50:57 compute-0 systemd-machined[155703]: Machine qemu-12-instance-00000008 terminated.
Nov 22 08:50:57 compute-0 podman[252945]: 2025-11-22 08:50:57.8777194 +0000 UTC m=+0.076971391 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public)
Nov 22 08:50:58 compute-0 nova_compute[189268]: 2025-11-22 08:50:58.057 189273 INFO nova.virt.libvirt.driver [-] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Instance destroyed successfully.
Nov 22 08:50:58 compute-0 nova_compute[189268]: 2025-11-22 08:50:58.057 189273 DEBUG nova.objects.instance [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lazy-loading 'resources' on Instance uuid 4414e066-bc1a-4a63-b3a0-5e88f0553032 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:50:58 compute-0 neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2[252310]: [NOTICE]   (252330) : haproxy version is 2.8.14-c23fe91
Nov 22 08:50:58 compute-0 neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2[252310]: [NOTICE]   (252330) : path to executable is /usr/sbin/haproxy
Nov 22 08:50:58 compute-0 neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2[252310]: [WARNING]  (252330) : Exiting Master process...
Nov 22 08:50:58 compute-0 neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2[252310]: [WARNING]  (252330) : Exiting Master process...
Nov 22 08:50:58 compute-0 neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2[252310]: [ALERT]    (252330) : Current worker (252336) exited with code 143 (Terminated)
Nov 22 08:50:58 compute-0 neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2[252310]: [WARNING]  (252330) : All workers exited. Exiting... (0)
Nov 22 08:50:58 compute-0 systemd[1]: libpod-b0710fa1d6c1d7a0978e00e37b2c2122983d4dbd99d08c8bcd9294e46f69648c.scope: Deactivated successfully.
Nov 22 08:50:58 compute-0 podman[252989]: 2025-11-22 08:50:58.208310828 +0000 UTC m=+0.275486148 container died b0710fa1d6c1d7a0978e00e37b2c2122983d4dbd99d08c8bcd9294e46f69648c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:50:58 compute-0 nova_compute[189268]: 2025-11-22 08:50:58.259 189273 DEBUG nova.virt.libvirt.vif [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T08:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1615837079',display_name='tempest-ServerActionsTestJSON-server-1615837079',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1615837079',id=8,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLdsHFflrgi7wGkvgkOXdCwC+kr9nW2mi1DXZmxLox1ZC0TuSJdcF2M8rMeuABQiSpoDl4gw87gDh3KsMHxzPzzF3d1/1OBKsUUK2YCN1YD+nS62FFKtRtMD4Bx9Y/yudw==',key_name='tempest-keypair-416169958',keypairs=<?>,launch_index=0,launched_at=2025-11-22T08:48:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8de05c82cd5c4f7bbe156c45495011c2',ramdisk_id='',reservation_id='r-b52qwrco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-748326472',owner_user_name='tempest-ServerActionsTestJSON-748326472-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T08:49:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='16843c91d66144f880a31734be4d3dee',uuid=4414e066-bc1a-4a63-b3a0-5e88f0553032,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "address": "fa:16:3e:7a:63:17", "network": {"id": "3485ad45-c98a-4c02-b9a2-34cc945b16d2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1783802964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f5ad619-9c", "ovs_interfaceid": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 08:50:58 compute-0 nova_compute[189268]: 2025-11-22 08:50:58.260 189273 DEBUG nova.network.os_vif_util [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Converting VIF {"id": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "address": "fa:16:3e:7a:63:17", "network": {"id": "3485ad45-c98a-4c02-b9a2-34cc945b16d2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1783802964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8de05c82cd5c4f7bbe156c45495011c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f5ad619-9c", "ovs_interfaceid": "3f5ad619-9cef-49b4-b0fd-8243d3506e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:50:58 compute-0 nova_compute[189268]: 2025-11-22 08:50:58.261 189273 DEBUG nova.network.os_vif_util [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:63:17,bridge_name='br-int',has_traffic_filtering=True,id=3f5ad619-9cef-49b4-b0fd-8243d3506e32,network=Network(3485ad45-c98a-4c02-b9a2-34cc945b16d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f5ad619-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:50:58 compute-0 nova_compute[189268]: 2025-11-22 08:50:58.261 189273 DEBUG os_vif [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:63:17,bridge_name='br-int',has_traffic_filtering=True,id=3f5ad619-9cef-49b4-b0fd-8243d3506e32,network=Network(3485ad45-c98a-4c02-b9a2-34cc945b16d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f5ad619-9c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 08:50:58 compute-0 nova_compute[189268]: 2025-11-22 08:50:58.264 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:58 compute-0 nova_compute[189268]: 2025-11-22 08:50:58.265 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f5ad619-9c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:50:58 compute-0 nova_compute[189268]: 2025-11-22 08:50:58.267 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:58 compute-0 nova_compute[189268]: 2025-11-22 08:50:58.268 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:50:58 compute-0 nova_compute[189268]: 2025-11-22 08:50:58.272 189273 INFO os_vif [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:63:17,bridge_name='br-int',has_traffic_filtering=True,id=3f5ad619-9cef-49b4-b0fd-8243d3506e32,network=Network(3485ad45-c98a-4c02-b9a2-34cc945b16d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f5ad619-9c')
Nov 22 08:50:58 compute-0 nova_compute[189268]: 2025-11-22 08:50:58.273 189273 INFO nova.virt.libvirt.driver [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Deleting instance files /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032_del
Nov 22 08:50:58 compute-0 nova_compute[189268]: 2025-11-22 08:50:58.277 189273 INFO nova.virt.libvirt.driver [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Deletion of /var/lib/nova/instances/4414e066-bc1a-4a63-b3a0-5e88f0553032_del complete
Nov 22 08:50:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b0710fa1d6c1d7a0978e00e37b2c2122983d4dbd99d08c8bcd9294e46f69648c-userdata-shm.mount: Deactivated successfully.
Nov 22 08:50:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ac6e91c93d2c9b84d1ba723cb85e2881cb7c975ccf8f9f9156364b23f390566-merged.mount: Deactivated successfully.
Nov 22 08:50:58 compute-0 nova_compute[189268]: 2025-11-22 08:50:58.338 189273 INFO nova.compute.manager [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Took 0.75 seconds to destroy the instance on the hypervisor.
Nov 22 08:50:58 compute-0 nova_compute[189268]: 2025-11-22 08:50:58.339 189273 DEBUG oslo.service.loopingcall [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 08:50:58 compute-0 nova_compute[189268]: 2025-11-22 08:50:58.339 189273 DEBUG nova.compute.manager [-] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 08:50:58 compute-0 nova_compute[189268]: 2025-11-22 08:50:58.339 189273 DEBUG nova.network.neutron [-] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 08:50:58 compute-0 podman[252989]: 2025-11-22 08:50:58.353114841 +0000 UTC m=+0.420290151 container cleanup b0710fa1d6c1d7a0978e00e37b2c2122983d4dbd99d08c8bcd9294e46f69648c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:50:58 compute-0 systemd[1]: libpod-conmon-b0710fa1d6c1d7a0978e00e37b2c2122983d4dbd99d08c8bcd9294e46f69648c.scope: Deactivated successfully.
Nov 22 08:50:58 compute-0 podman[253035]: 2025-11-22 08:50:58.547166588 +0000 UTC m=+0.168824100 container remove b0710fa1d6c1d7a0978e00e37b2c2122983d4dbd99d08c8bcd9294e46f69648c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 08:50:58 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:58.555 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[870443a0-1217-47cc-97ed-4b02b843a848]: (4, ('Sat Nov 22 08:50:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2 (b0710fa1d6c1d7a0978e00e37b2c2122983d4dbd99d08c8bcd9294e46f69648c)\nb0710fa1d6c1d7a0978e00e37b2c2122983d4dbd99d08c8bcd9294e46f69648c\nSat Nov 22 08:50:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2 (b0710fa1d6c1d7a0978e00e37b2c2122983d4dbd99d08c8bcd9294e46f69648c)\nb0710fa1d6c1d7a0978e00e37b2c2122983d4dbd99d08c8bcd9294e46f69648c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:50:58 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:58.558 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[60420f3c-31b6-4b11-9896-a7e5fc66fac6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:50:58 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:58.559 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3485ad45-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:50:58 compute-0 nova_compute[189268]: 2025-11-22 08:50:58.561 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:58 compute-0 kernel: tap3485ad45-c0: left promiscuous mode
Nov 22 08:50:58 compute-0 nova_compute[189268]: 2025-11-22 08:50:58.574 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:58 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:58.577 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[becb0e90-1fd7-4e9f-b6eb-b969f5002497]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:50:58 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:58.592 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[d159448b-5ed5-44d3-9720-37afacb1f9f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:50:58 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:58.595 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[b57aa6be-462b-4bf7-8c30-5d6852651178]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:50:58 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:58.614 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[22c10530-a0ef-4e22-bb1f-ed12de7b9d2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650348, 'reachable_time': 31536, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253050, 'error': None, 'target': 'ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:50:58 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:58.616 106754 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3485ad45-c98a-4c02-b9a2-34cc945b16d2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 08:50:58 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:50:58.616 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[468a0741-e8f8-4e12-9090-86f9985e4d8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:50:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d3485ad45\x2dc98a\x2d4c02\x2db9a2\x2d34cc945b16d2.mount: Deactivated successfully.
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.008 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.113 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.137 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.138 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.138 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.138 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.284 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.492 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.494 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5352MB free_disk=72.46037673950195GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.494 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.494 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.586 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4414e066-bc1a-4a63-b3a0-5e88f0553032 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.586 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 38817707-1f5a-4596-bfd2-b48048331de7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.587 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.587 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.644 189273 DEBUG nova.network.neutron [-] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.668 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.682 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.687 189273 INFO nova.compute.manager [-] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Took 1.35 seconds to deallocate network for instance.
Nov 22 08:50:59 compute-0 podman[203476]: time="2025-11-22T08:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:50:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 08:50:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4342 "" "Go-http-client/1.1"
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.807 189273 DEBUG nova.compute.manager [req-c3ca5b59-7e88-4f56-a29d-395040866568 req-fc02c168-3bc8-4eee-9bee-b1e8785d8bff 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Received event network-vif-deleted-3f5ad619-9cef-49b4-b0fd-8243d3506e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.947 189273 DEBUG nova.compute.manager [req-3d427d60-c782-42bb-9d24-fe31ae7cf24a req-3049891e-cfda-44e9-98c6-d93fc69df7b3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Received event network-vif-plugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.947 189273 DEBUG oslo_concurrency.lockutils [req-3d427d60-c782-42bb-9d24-fe31ae7cf24a req-3049891e-cfda-44e9-98c6-d93fc69df7b3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.947 189273 DEBUG oslo_concurrency.lockutils [req-3d427d60-c782-42bb-9d24-fe31ae7cf24a req-3049891e-cfda-44e9-98c6-d93fc69df7b3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.948 189273 DEBUG oslo_concurrency.lockutils [req-3d427d60-c782-42bb-9d24-fe31ae7cf24a req-3049891e-cfda-44e9-98c6-d93fc69df7b3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.948 189273 DEBUG nova.compute.manager [req-3d427d60-c782-42bb-9d24-fe31ae7cf24a req-3049891e-cfda-44e9-98c6-d93fc69df7b3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] No waiting events found dispatching network-vif-plugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:50:59 compute-0 nova_compute[189268]: 2025-11-22 08:50:59.948 189273 WARNING nova.compute.manager [req-3d427d60-c782-42bb-9d24-fe31ae7cf24a req-3049891e-cfda-44e9-98c6-d93fc69df7b3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Received unexpected event network-vif-plugged-3f5ad619-9cef-49b4-b0fd-8243d3506e32 for instance with vm_state active and task_state deleting.
Nov 22 08:51:00 compute-0 podman[253052]: 2025-11-22 08:51:00.144846992 +0000 UTC m=+0.098668414 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:51:01 compute-0 nova_compute[189268]: 2025-11-22 08:51:01.070 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:51:01 compute-0 nova_compute[189268]: 2025-11-22 08:51:01.071 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:51:01 compute-0 nova_compute[189268]: 2025-11-22 08:51:01.071 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:51:01 compute-0 nova_compute[189268]: 2025-11-22 08:51:01.071 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 08:51:01 compute-0 openstack_network_exporter[205661]: ERROR   08:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:51:01 compute-0 openstack_network_exporter[205661]: ERROR   08:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:51:01 compute-0 openstack_network_exporter[205661]: ERROR   08:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:51:01 compute-0 openstack_network_exporter[205661]: ERROR   08:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:51:01 compute-0 openstack_network_exporter[205661]: ERROR   08:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:51:01 compute-0 nova_compute[189268]: 2025-11-22 08:51:01.941 189273 DEBUG oslo_concurrency.lockutils [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:51:01 compute-0 nova_compute[189268]: 2025-11-22 08:51:01.941 189273 DEBUG oslo_concurrency.lockutils [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:51:02 compute-0 nova_compute[189268]: 2025-11-22 08:51:02.031 189273 DEBUG nova.compute.provider_tree [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:51:02 compute-0 nova_compute[189268]: 2025-11-22 08:51:02.045 189273 DEBUG nova.scheduler.client.report [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:51:02 compute-0 nova_compute[189268]: 2025-11-22 08:51:02.066 189273 DEBUG oslo_concurrency.lockutils [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:51:02 compute-0 nova_compute[189268]: 2025-11-22 08:51:02.112 189273 INFO nova.scheduler.client.report [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Deleted allocations for instance 4414e066-bc1a-4a63-b3a0-5e88f0553032
Nov 22 08:51:02 compute-0 nova_compute[189268]: 2025-11-22 08:51:02.397 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:02 compute-0 nova_compute[189268]: 2025-11-22 08:51:02.464 189273 DEBUG oslo_concurrency.lockutils [None req-2546bc58-238a-45b2-aba9-383e4b49ebde 16843c91d66144f880a31734be4d3dee 8de05c82cd5c4f7bbe156c45495011c2 - - default default] Lock "4414e066-bc1a-4a63-b3a0-5e88f0553032" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:51:03 compute-0 nova_compute[189268]: 2025-11-22 08:51:03.268 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:03 compute-0 nova_compute[189268]: 2025-11-22 08:51:03.944 189273 DEBUG nova.network.neutron [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Successfully updated port: 1a2be7e7-4a90-44c8-bdf7-adac66f1e84d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 08:51:04 compute-0 nova_compute[189268]: 2025-11-22 08:51:03.999 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquiring lock "refresh_cache-38817707-1f5a-4596-bfd2-b48048331de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:51:04 compute-0 nova_compute[189268]: 2025-11-22 08:51:04.000 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquired lock "refresh_cache-38817707-1f5a-4596-bfd2-b48048331de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:51:04 compute-0 nova_compute[189268]: 2025-11-22 08:51:04.000 189273 DEBUG nova.network.neutron [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 08:51:04 compute-0 nova_compute[189268]: 2025-11-22 08:51:04.092 189273 DEBUG nova.compute.manager [req-fe7c3540-fbe9-46c1-942e-e01003041ca0 req-40f2de98-14dd-443f-9f5b-57bf00fabf26 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Received event network-changed-1a2be7e7-4a90-44c8-bdf7-adac66f1e84d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:51:04 compute-0 nova_compute[189268]: 2025-11-22 08:51:04.093 189273 DEBUG nova.compute.manager [req-fe7c3540-fbe9-46c1-942e-e01003041ca0 req-40f2de98-14dd-443f-9f5b-57bf00fabf26 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Refreshing instance network info cache due to event network-changed-1a2be7e7-4a90-44c8-bdf7-adac66f1e84d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:51:04 compute-0 nova_compute[189268]: 2025-11-22 08:51:04.093 189273 DEBUG oslo_concurrency.lockutils [req-fe7c3540-fbe9-46c1-942e-e01003041ca0 req-40f2de98-14dd-443f-9f5b-57bf00fabf26 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-38817707-1f5a-4596-bfd2-b48048331de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:51:04 compute-0 nova_compute[189268]: 2025-11-22 08:51:04.248 189273 DEBUG nova.network.neutron [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.455 189273 DEBUG nova.network.neutron [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Updating instance_info_cache with network_info: [{"id": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "address": "fa:16:3e:7a:15:7f", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a2be7e7-4a", "ovs_interfaceid": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.503 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Releasing lock "refresh_cache-38817707-1f5a-4596-bfd2-b48048331de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.504 189273 DEBUG nova.compute.manager [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Instance network_info: |[{"id": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "address": "fa:16:3e:7a:15:7f", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a2be7e7-4a", "ovs_interfaceid": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.504 189273 DEBUG oslo_concurrency.lockutils [req-fe7c3540-fbe9-46c1-942e-e01003041ca0 req-40f2de98-14dd-443f-9f5b-57bf00fabf26 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-38817707-1f5a-4596-bfd2-b48048331de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.505 189273 DEBUG nova.network.neutron [req-fe7c3540-fbe9-46c1-942e-e01003041ca0 req-40f2de98-14dd-443f-9f5b-57bf00fabf26 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Refreshing network info cache for port 1a2be7e7-4a90-44c8-bdf7-adac66f1e84d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.508 189273 DEBUG nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Start _get_guest_xml network_info=[{"id": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "address": "fa:16:3e:7a:15:7f", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a2be7e7-4a", "ovs_interfaceid": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T08:46:32Z,direct_url=<?>,disk_format='qcow2',id=ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T08:46:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.516 189273 WARNING nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.522 189273 DEBUG nova.virt.libvirt.host [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.523 189273 DEBUG nova.virt.libvirt.host [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.531 189273 DEBUG nova.virt.libvirt.host [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.531 189273 DEBUG nova.virt.libvirt.host [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.532 189273 DEBUG nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.532 189273 DEBUG nova.virt.hardware [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T08:46:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='60cc47c3-347f-4964-bb52-9bef8d0548a9',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T08:46:32Z,direct_url=<?>,disk_format='qcow2',id=ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T08:46:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.532 189273 DEBUG nova.virt.hardware [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.532 189273 DEBUG nova.virt.hardware [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.533 189273 DEBUG nova.virt.hardware [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.533 189273 DEBUG nova.virt.hardware [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.533 189273 DEBUG nova.virt.hardware [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.533 189273 DEBUG nova.virt.hardware [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.533 189273 DEBUG nova.virt.hardware [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.534 189273 DEBUG nova.virt.hardware [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.534 189273 DEBUG nova.virt.hardware [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.534 189273 DEBUG nova.virt.hardware [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.537 189273 DEBUG nova.virt.libvirt.vif [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-472251035',display_name='tempest-TestNetworkBasicOps-server-472251035',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-472251035',id=12,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL9vycT5NJv7h5GytTrKsGClvziWtZCPE2ibnv98G7plGcyXOOnBvQoSMG5BU87Xual/uEqsQJDZ+kok1766O/+Mm3LWOYUghijS4tCtVJk5eyI0zce0gefqvKXvW6kXXQ==',key_name='tempest-TestNetworkBasicOps-203402494',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b97da7a1b46046e59c36f5af412de432',ramdisk_id='',reservation_id='r-bfgkwdxj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1679658819',owner_user_name='tempest-TestNetworkBasicOps-1679658819-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:50:54Z,user_data=None,user_id='584cc3e3a5224a2e9a08273882841998',uuid=38817707-1f5a-4596-bfd2-b48048331de7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "address": "fa:16:3e:7a:15:7f", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a2be7e7-4a", "ovs_interfaceid": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.537 189273 DEBUG nova.network.os_vif_util [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Converting VIF {"id": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "address": "fa:16:3e:7a:15:7f", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a2be7e7-4a", "ovs_interfaceid": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.538 189273 DEBUG nova.network.os_vif_util [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:15:7f,bridge_name='br-int',has_traffic_filtering=True,id=1a2be7e7-4a90-44c8-bdf7-adac66f1e84d,network=Network(5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a2be7e7-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.539 189273 DEBUG nova.objects.instance [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lazy-loading 'pci_devices' on Instance uuid 38817707-1f5a-4596-bfd2-b48048331de7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.556 189273 DEBUG nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] End _get_guest_xml xml=<domain type="kvm">
Nov 22 08:51:05 compute-0 nova_compute[189268]:   <uuid>38817707-1f5a-4596-bfd2-b48048331de7</uuid>
Nov 22 08:51:05 compute-0 nova_compute[189268]:   <name>instance-0000000c</name>
Nov 22 08:51:05 compute-0 nova_compute[189268]:   <memory>131072</memory>
Nov 22 08:51:05 compute-0 nova_compute[189268]:   <vcpu>1</vcpu>
Nov 22 08:51:05 compute-0 nova_compute[189268]:   <metadata>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <nova:name>tempest-TestNetworkBasicOps-server-472251035</nova:name>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <nova:creationTime>2025-11-22 08:51:05</nova:creationTime>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <nova:flavor name="m1.nano">
Nov 22 08:51:05 compute-0 nova_compute[189268]:         <nova:memory>128</nova:memory>
Nov 22 08:51:05 compute-0 nova_compute[189268]:         <nova:disk>1</nova:disk>
Nov 22 08:51:05 compute-0 nova_compute[189268]:         <nova:swap>0</nova:swap>
Nov 22 08:51:05 compute-0 nova_compute[189268]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 08:51:05 compute-0 nova_compute[189268]:         <nova:vcpus>1</nova:vcpus>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       </nova:flavor>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <nova:owner>
Nov 22 08:51:05 compute-0 nova_compute[189268]:         <nova:user uuid="584cc3e3a5224a2e9a08273882841998">tempest-TestNetworkBasicOps-1679658819-project-member</nova:user>
Nov 22 08:51:05 compute-0 nova_compute[189268]:         <nova:project uuid="b97da7a1b46046e59c36f5af412de432">tempest-TestNetworkBasicOps-1679658819</nova:project>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       </nova:owner>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <nova:root type="image" uuid="ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <nova:ports>
Nov 22 08:51:05 compute-0 nova_compute[189268]:         <nova:port uuid="1a2be7e7-4a90-44c8-bdf7-adac66f1e84d">
Nov 22 08:51:05 compute-0 nova_compute[189268]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:         </nova:port>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       </nova:ports>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     </nova:instance>
Nov 22 08:51:05 compute-0 nova_compute[189268]:   </metadata>
Nov 22 08:51:05 compute-0 nova_compute[189268]:   <sysinfo type="smbios">
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <system>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <entry name="manufacturer">RDO</entry>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <entry name="product">OpenStack Compute</entry>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <entry name="serial">38817707-1f5a-4596-bfd2-b48048331de7</entry>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <entry name="uuid">38817707-1f5a-4596-bfd2-b48048331de7</entry>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <entry name="family">Virtual Machine</entry>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     </system>
Nov 22 08:51:05 compute-0 nova_compute[189268]:   </sysinfo>
Nov 22 08:51:05 compute-0 nova_compute[189268]:   <os>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <boot dev="hd"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <smbios mode="sysinfo"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:   </os>
Nov 22 08:51:05 compute-0 nova_compute[189268]:   <features>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <acpi/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <apic/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <vmcoreinfo/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:   </features>
Nov 22 08:51:05 compute-0 nova_compute[189268]:   <clock offset="utc">
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <timer name="hpet" present="no"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:   </clock>
Nov 22 08:51:05 compute-0 nova_compute[189268]:   <cpu mode="host-model" match="exact">
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:51:05 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <target dev="vda" bus="virtio"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <disk type="file" device="cdrom">
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <driver name="qemu" type="raw" cache="none"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk.config"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <target dev="sda" bus="sata"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <interface type="ethernet">
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <mac address="fa:16:3e:7a:15:7f"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <mtu size="1442"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <target dev="tap1a2be7e7-4a"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     </interface>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <serial type="pty">
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <log file="/var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/console.log" append="off"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     </serial>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <video>
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     </video>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <input type="tablet" bus="usb"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <rng model="virtio">
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <backend model="random">/dev/urandom</backend>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <controller type="usb" index="0"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     <memballoon model="virtio">
Nov 22 08:51:05 compute-0 nova_compute[189268]:       <stats period="10"/>
Nov 22 08:51:05 compute-0 nova_compute[189268]:     </memballoon>
Nov 22 08:51:05 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:51:05 compute-0 nova_compute[189268]: </domain>
Nov 22 08:51:05 compute-0 nova_compute[189268]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.557 189273 DEBUG nova.compute.manager [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Preparing to wait for external event network-vif-plugged-1a2be7e7-4a90-44c8-bdf7-adac66f1e84d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.557 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquiring lock "38817707-1f5a-4596-bfd2-b48048331de7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.558 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "38817707-1f5a-4596-bfd2-b48048331de7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.558 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "38817707-1f5a-4596-bfd2-b48048331de7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.559 189273 DEBUG nova.virt.libvirt.vif [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-472251035',display_name='tempest-TestNetworkBasicOps-server-472251035',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-472251035',id=12,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL9vycT5NJv7h5GytTrKsGClvziWtZCPE2ibnv98G7plGcyXOOnBvQoSMG5BU87Xual/uEqsQJDZ+kok1766O/+Mm3LWOYUghijS4tCtVJk5eyI0zce0gefqvKXvW6kXXQ==',key_name='tempest-TestNetworkBasicOps-203402494',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b97da7a1b46046e59c36f5af412de432',ramdisk_id='',reservation_id='r-bfgkwdxj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1679658819',owner_user_name='tempest-TestNetworkBasicOps-1679658819-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:50:54Z,user_data=None,user_id='584cc3e3a5224a2e9a08273882841998',uuid=38817707-1f5a-4596-bfd2-b48048331de7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "address": "fa:16:3e:7a:15:7f", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a2be7e7-4a", "ovs_interfaceid": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.559 189273 DEBUG nova.network.os_vif_util [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Converting VIF {"id": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "address": "fa:16:3e:7a:15:7f", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a2be7e7-4a", "ovs_interfaceid": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.560 189273 DEBUG nova.network.os_vif_util [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:15:7f,bridge_name='br-int',has_traffic_filtering=True,id=1a2be7e7-4a90-44c8-bdf7-adac66f1e84d,network=Network(5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a2be7e7-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.560 189273 DEBUG os_vif [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:15:7f,bridge_name='br-int',has_traffic_filtering=True,id=1a2be7e7-4a90-44c8-bdf7-adac66f1e84d,network=Network(5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a2be7e7-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.560 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.561 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.561 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.564 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.564 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a2be7e7-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.564 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1a2be7e7-4a, col_values=(('external_ids', {'iface-id': '1a2be7e7-4a90-44c8-bdf7-adac66f1e84d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:15:7f', 'vm-uuid': '38817707-1f5a-4596-bfd2-b48048331de7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
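The AddPortCommand/DbSetCommand pair above is os-vif's ovsdbapp transaction; the external_ids:iface-id value is what lets ovn-controller match this OVS interface to the Neutron port (the lport claim at 08:51:06 below). Expressed as a rough ovs-vsctl equivalent, with values taken from this log (an illustrative sketch, not the command nova actually runs):

    ovs-vsctl --may-exist add-port br-int tap1a2be7e7-4a \
        -- set Interface tap1a2be7e7-4a \
           external_ids:iface-id=1a2be7e7-4a90-44c8-bdf7-adac66f1e84d \
           external_ids:iface-status=active \
           external_ids:attached-mac='"fa:16:3e:7a:15:7f"' \
           external_ids:vm-uuid=38817707-1f5a-4596-bfd2-b48048331de7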
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.566 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:05 compute-0 NetworkManager[56326]: <info>  [1763801465.5670] manager: (tap1a2be7e7-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.568 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.573 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.574 189273 INFO os_vif [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:15:7f,bridge_name='br-int',has_traffic_filtering=True,id=1a2be7e7-4a90-44c8-bdf7-adac66f1e84d,network=Network(5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a2be7e7-4a')
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.625 189273 DEBUG nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.626 189273 DEBUG nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.626 189273 DEBUG nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] No VIF found with MAC fa:16:3e:7a:15:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 08:51:05 compute-0 nova_compute[189268]: 2025-11-22 08:51:05.626 189273 INFO nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Using config drive
Nov 22 08:51:06 compute-0 nova_compute[189268]: 2025-11-22 08:51:06.186 189273 INFO nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Creating config drive at /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk.config
Nov 22 08:51:06 compute-0 nova_compute[189268]: 2025-11-22 08:51:06.192 189273 DEBUG oslo_concurrency.processutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmkdoqj2i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:51:06 compute-0 nova_compute[189268]: 2025-11-22 08:51:06.319 189273 DEBUG oslo_concurrency.processutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmkdoqj2i" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
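The config drive written by the mkisofs run above is an ordinary ISO 9660 image with volume label config-2, so it can be inspected offline; a sketch (read-only loop mount, path from the log):

    mount -o loop,ro /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk.config /mnt
    ls /mnt/openstack/latest    # meta_data.json, network_data.json, ...
    umount /mnt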
Nov 22 08:51:06 compute-0 kernel: tap1a2be7e7-4a: entered promiscuous mode
Nov 22 08:51:06 compute-0 NetworkManager[56326]: <info>  [1763801466.3888] manager: (tap1a2be7e7-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Nov 22 08:51:06 compute-0 ovn_controller[97783]: 2025-11-22T08:51:06Z|00136|binding|INFO|Claiming lport 1a2be7e7-4a90-44c8-bdf7-adac66f1e84d for this chassis.
Nov 22 08:51:06 compute-0 ovn_controller[97783]: 2025-11-22T08:51:06Z|00137|binding|INFO|1a2be7e7-4a90-44c8-bdf7-adac66f1e84d: Claiming fa:16:3e:7a:15:7f 10.100.0.3
Nov 22 08:51:06 compute-0 nova_compute[189268]: 2025-11-22 08:51:06.389 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.403 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:15:7f 10.100.0.3'], port_security=['fa:16:3e:7a:15:7f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '38817707-1f5a-4596-bfd2-b48048331de7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b97da7a1b46046e59c36f5af412de432', 'neutron:revision_number': '2', 'neutron:security_group_ids': '04ad741a-81e1-45be-b72e-4b39973817da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42356185-0f5c-4367-9443-beeb712f6f09, chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=1a2be7e7-4a90-44c8-bdf7-adac66f1e84d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.404 106642 INFO neutron.agent.ovn.metadata.agent [-] Port 1a2be7e7-4a90-44c8-bdf7-adac66f1e84d in datapath 5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3 bound to our chassis
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.405 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.416 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[1ec6e28e-5e57-4190-970b-d81953c0d1f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.417 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5cf0b2bb-a1 in ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.419 239666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5cf0b2bb-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.419 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[ff4f28ac-ecb9-4f1b-b29f-1eecc1acdf1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.420 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[94400537-0d73-4e3e-b160-1141da782557]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:06 compute-0 systemd-machined[155703]: New machine qemu-13-instance-0000000c.
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.434 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[c4934b6e-1b25-40e0-80c4-505689786bbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:06 compute-0 nova_compute[189268]: 2025-11-22 08:51:06.454 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:06 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.455 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[317edda0-7c0c-4e38-94c0-80c23cffec93]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:06 compute-0 ovn_controller[97783]: 2025-11-22T08:51:06Z|00138|binding|INFO|Setting lport 1a2be7e7-4a90-44c8-bdf7-adac66f1e84d ovn-installed in OVS
Nov 22 08:51:06 compute-0 ovn_controller[97783]: 2025-11-22T08:51:06Z|00139|binding|INFO|Setting lport 1a2be7e7-4a90-44c8-bdf7-adac66f1e84d up in Southbound
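With the lport claimed, the binding can be verified from both ends; a sketch, assuming ovn-sbctl and ovs-vsctl on compute-0 are pointed at the deployment's southbound and local OVSDB sockets:

    # Southbound view: Port_Binding should list this chassis, and up becomes true
    ovn-sbctl find Port_Binding logical_port=1a2be7e7-4a90-44c8-bdf7-adac66f1e84d
    # Local OVS view: ovn-controller sets this once its flows are installed
    ovs-vsctl get Interface tap1a2be7e7-4a external_ids:ovn-installed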
Nov 22 08:51:06 compute-0 systemd-udevd[253099]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:51:06 compute-0 nova_compute[189268]: 2025-11-22 08:51:06.471 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:06 compute-0 NetworkManager[56326]: <info>  [1763801466.4830] device (tap1a2be7e7-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 08:51:06 compute-0 NetworkManager[56326]: <info>  [1763801466.4877] device (tap1a2be7e7-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.487 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[671041eb-f0b7-4c66-b4b0-4faf944e62b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:06 compute-0 NetworkManager[56326]: <info>  [1763801466.4950] manager: (tap5cf0b2bb-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.494 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3e8252-9103-4f1d-9a14-0e36cf962f1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.521 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[70845497-0d7f-4915-a460-6d102f392340]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.523 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9157f4-8469-4b94-9bb7-dd6fc0e030a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:06 compute-0 NetworkManager[56326]: <info>  [1763801466.5466] device (tap5cf0b2bb-a0): carrier: link connected
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.551 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[da200cb2-5910-46f2-a321-e90fd28d6103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.566 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[32a6cf6b-e61a-4051-b8a0-f654b51d5c2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cf0b2bb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:a1:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658013, 'reachable_time': 31479, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253128, 'error': None, 'target': 'ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.580 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[b1cea471-a4f1-4d19-83b5-eb940fbadf6d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6c:a1df'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658013, 'tstamp': 658013}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253129, 'error': None, 'target': 'ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.592 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[be9853a2-637d-451d-ab4d-1816d860eab0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cf0b2bb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:a1:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658013, 'reachable_time': 31479, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253130, 'error': None, 'target': 'ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.615 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[0e1b4d12-000d-4cd8-bef4-77cabda2bc1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.672 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[c44c3766-1d4d-45b2-9db2-322ccec3bcc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.673 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cf0b2bb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.674 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.674 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cf0b2bb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:51:06 compute-0 nova_compute[189268]: 2025-11-22 08:51:06.676 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:06 compute-0 kernel: tap5cf0b2bb-a0: entered promiscuous mode
Nov 22 08:51:06 compute-0 NetworkManager[56326]: <info>  [1763801466.6800] manager: (tap5cf0b2bb-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Nov 22 08:51:06 compute-0 nova_compute[189268]: 2025-11-22 08:51:06.682 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.683 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cf0b2bb-a0, col_values=(('external_ids', {'iface-id': '7ba31b4f-cb70-4305-a919-49ac9f8bddd1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:51:06 compute-0 nova_compute[189268]: 2025-11-22 08:51:06.685 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:06 compute-0 ovn_controller[97783]: 2025-11-22T08:51:06Z|00140|binding|INFO|Releasing lport 7ba31b4f-cb70-4305-a919-49ac9f8bddd1 from this chassis (sb_readonly=0)
Nov 22 08:51:06 compute-0 nova_compute[189268]: 2025-11-22 08:51:06.686 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.687 106642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.688 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[a563958b-40d8-4e4b-849b-ecdb020069cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.690 106642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: global
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     log         /dev/log local0 debug
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     log-tag     haproxy-metadata-proxy-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     user        root
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     group       root
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     maxconn     1024
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     pidfile     /var/lib/neutron/external/pids/5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3.pid.haproxy
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     daemon
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: defaults
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     log global
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     mode http
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     option httplog
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     option dontlognull
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     option http-server-close
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     option forwardfor
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     retries                 3
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     timeout http-request    30s
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     timeout connect         30s
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     timeout client          32s
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     timeout server          32s
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     timeout http-keep-alive 30s
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: listen listener
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     bind 169.254.169.254:80
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:     http-request add-header X-OVN-Network-ID 5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
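The rendered config binds 169.254.169.254:80 inside the ovnmeta- namespace, stamps each request with X-OVN-Network-ID, and forwards it to the agent's unix socket at /var/lib/neutron/metadata_proxy. The path can be probed from the namespace itself; a sketch (a 404 from the agent would only mean the probe's source address does not map to a port on this network):

    ip netns exec ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3 \
        curl -sv http://169.254.169.254/openstack/latest/meta_data.json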
Nov 22 08:51:06 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:06.691 106642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3', 'env', 'PROCESS_TAG=haproxy-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 08:51:06 compute-0 nova_compute[189268]: 2025-11-22 08:51:06.704 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.156 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801467.1549826, 38817707-1f5a-4596-bfd2-b48048331de7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.158 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] VM Started (Lifecycle Event)
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.183 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.191 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801467.1576338, 38817707-1f5a-4596-bfd2-b48048331de7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.192 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] VM Paused (Lifecycle Event)
Nov 22 08:51:07 compute-0 podman[253165]: 2025-11-22 08:51:07.100117584 +0000 UTC m=+0.031791255 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.209 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.215 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:51:07 compute-0 podman[253165]: 2025-11-22 08:51:07.226651587 +0000 UTC m=+0.158325248 container create e3658ab95dc0e6ee335f13a59651e35fb9a9ca0407e21e530ca321d3c8292072 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.236 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.289 189273 DEBUG nova.compute.manager [req-2599033d-8ed9-4593-88d8-608afc703e5b req-7ed14b88-881c-45cf-9183-deb3bd3bcbe1 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Received event network-vif-plugged-1a2be7e7-4a90-44c8-bdf7-adac66f1e84d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.289 189273 DEBUG oslo_concurrency.lockutils [req-2599033d-8ed9-4593-88d8-608afc703e5b req-7ed14b88-881c-45cf-9183-deb3bd3bcbe1 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "38817707-1f5a-4596-bfd2-b48048331de7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.289 189273 DEBUG oslo_concurrency.lockutils [req-2599033d-8ed9-4593-88d8-608afc703e5b req-7ed14b88-881c-45cf-9183-deb3bd3bcbe1 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "38817707-1f5a-4596-bfd2-b48048331de7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.290 189273 DEBUG oslo_concurrency.lockutils [req-2599033d-8ed9-4593-88d8-608afc703e5b req-7ed14b88-881c-45cf-9183-deb3bd3bcbe1 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "38817707-1f5a-4596-bfd2-b48048331de7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.290 189273 DEBUG nova.compute.manager [req-2599033d-8ed9-4593-88d8-608afc703e5b req-7ed14b88-881c-45cf-9183-deb3bd3bcbe1 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Processing event network-vif-plugged-1a2be7e7-4a90-44c8-bdf7-adac66f1e84d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.291 189273 DEBUG nova.compute.manager [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.296 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801467.295662, 38817707-1f5a-4596-bfd2-b48048331de7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.296 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] VM Resumed (Lifecycle Event)
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.298 189273 DEBUG nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.303 189273 INFO nova.virt.libvirt.driver [-] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Instance spawned successfully.
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.303 189273 DEBUG nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.317 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:51:07 compute-0 systemd[1]: Started libpod-conmon-e3658ab95dc0e6ee335f13a59651e35fb9a9ca0407e21e530ca321d3c8292072.scope.
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.323 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.331 189273 DEBUG nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.331 189273 DEBUG nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.332 189273 DEBUG nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.332 189273 DEBUG nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.333 189273 DEBUG nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.333 189273 DEBUG nova.virt.libvirt.driver [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:51:07 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b1277e07964988bdc9ea576564fa9dd5d76dd5113bc767b48f819d067085ff/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.357 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:51:07 compute-0 podman[253165]: 2025-11-22 08:51:07.397759997 +0000 UTC m=+0.329433678 container init e3658ab95dc0e6ee335f13a59651e35fb9a9ca0407e21e530ca321d3c8292072 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.400 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:07 compute-0 podman[253165]: 2025-11-22 08:51:07.4056658 +0000 UTC m=+0.337339451 container start e3658ab95dc0e6ee335f13a59651e35fb9a9ca0407e21e530ca321d3c8292072 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:51:07 compute-0 neutron-haproxy-ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3[253180]: [NOTICE]   (253185) : New worker (253187) forked
Nov 22 08:51:07 compute-0 neutron-haproxy-ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3[253180]: [NOTICE]   (253185) : Loading success.
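In this podified deployment the proxy runs as a podman container named after the datapath, and haproxy logs to the journal under the log-tag from the config above; a sketch for checking both:

    podman ps --filter name=neutron-haproxy-ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3
    journalctl -t haproxy-metadata-proxy-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3 -n 20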
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.482 189273 INFO nova.compute.manager [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Took 12.92 seconds to spawn the instance on the hypervisor.
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.482 189273 DEBUG nova.compute.manager [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.575 189273 INFO nova.compute.manager [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Took 13.68 seconds to build instance.
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.610 189273 DEBUG oslo_concurrency.lockutils [None req-b11e6ac5-ce56-4d0f-bb06-3e09bf54f604 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "38817707-1f5a-4596-bfd2-b48048331de7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.857 189273 DEBUG nova.network.neutron [req-fe7c3540-fbe9-46c1-942e-e01003041ca0 req-40f2de98-14dd-443f-9f5b-57bf00fabf26 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Updated VIF entry in instance network info cache for port 1a2be7e7-4a90-44c8-bdf7-adac66f1e84d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.857 189273 DEBUG nova.network.neutron [req-fe7c3540-fbe9-46c1-942e-e01003041ca0 req-40f2de98-14dd-443f-9f5b-57bf00fabf26 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Updating instance_info_cache with network_info: [{"id": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "address": "fa:16:3e:7a:15:7f", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a2be7e7-4a", "ovs_interfaceid": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:51:07 compute-0 nova_compute[189268]: 2025-11-22 08:51:07.874 189273 DEBUG oslo_concurrency.lockutils [req-fe7c3540-fbe9-46c1-942e-e01003041ca0 req-40f2de98-14dd-443f-9f5b-57bf00fabf26 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-38817707-1f5a-4596-bfd2-b48048331de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:51:09 compute-0 nova_compute[189268]: 2025-11-22 08:51:09.769 189273 DEBUG nova.compute.manager [req-85718f85-495a-41ba-a442-5407e9179fd7 req-5fc455ee-e0db-4738-a98d-c0dffe8f8842 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Received event network-vif-plugged-1a2be7e7-4a90-44c8-bdf7-adac66f1e84d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:51:09 compute-0 nova_compute[189268]: 2025-11-22 08:51:09.770 189273 DEBUG oslo_concurrency.lockutils [req-85718f85-495a-41ba-a442-5407e9179fd7 req-5fc455ee-e0db-4738-a98d-c0dffe8f8842 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "38817707-1f5a-4596-bfd2-b48048331de7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:51:09 compute-0 nova_compute[189268]: 2025-11-22 08:51:09.770 189273 DEBUG oslo_concurrency.lockutils [req-85718f85-495a-41ba-a442-5407e9179fd7 req-5fc455ee-e0db-4738-a98d-c0dffe8f8842 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "38817707-1f5a-4596-bfd2-b48048331de7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:51:09 compute-0 nova_compute[189268]: 2025-11-22 08:51:09.771 189273 DEBUG oslo_concurrency.lockutils [req-85718f85-495a-41ba-a442-5407e9179fd7 req-5fc455ee-e0db-4738-a98d-c0dffe8f8842 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "38817707-1f5a-4596-bfd2-b48048331de7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:51:09 compute-0 nova_compute[189268]: 2025-11-22 08:51:09.771 189273 DEBUG nova.compute.manager [req-85718f85-495a-41ba-a442-5407e9179fd7 req-5fc455ee-e0db-4738-a98d-c0dffe8f8842 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] No waiting events found dispatching network-vif-plugged-1a2be7e7-4a90-44c8-bdf7-adac66f1e84d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:51:09 compute-0 nova_compute[189268]: 2025-11-22 08:51:09.772 189273 WARNING nova.compute.manager [req-85718f85-495a-41ba-a442-5407e9179fd7 req-5fc455ee-e0db-4738-a98d-c0dffe8f8842 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Received unexpected event network-vif-plugged-1a2be7e7-4a90-44c8-bdf7-adac66f1e84d for instance with vm_state active and task_state None.
Nov 22 08:51:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:09.993 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:51:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:09.994 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:51:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:09.994 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:51:10 compute-0 podman[253197]: 2025-11-22 08:51:10.108219749 +0000 UTC m=+0.064439754 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 22 08:51:10 compute-0 podman[253198]: 2025-11-22 08:51:10.14172719 +0000 UTC m=+0.095675474 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 08:51:10 compute-0 podman[253196]: 2025-11-22 08:51:10.144051371 +0000 UTC m=+0.102278540 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:51:10 compute-0 nova_compute[189268]: 2025-11-22 08:51:10.567 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:12 compute-0 nova_compute[189268]: 2025-11-22 08:51:12.402 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:12 compute-0 sshd-session[253253]: Invalid user hadoop from 80.94.92.164 port 36846
Nov 22 08:51:13 compute-0 nova_compute[189268]: 2025-11-22 08:51:13.055 189273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763801458.0529528, 4414e066-bc1a-4a63-b3a0-5e88f0553032 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:51:13 compute-0 nova_compute[189268]: 2025-11-22 08:51:13.055 189273 INFO nova.compute.manager [-] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] VM Stopped (Lifecycle Event)
Nov 22 08:51:13 compute-0 nova_compute[189268]: 2025-11-22 08:51:13.072 189273 DEBUG nova.compute.manager [None req-a8df6632-d666-484a-84f2-8679cc4ba24d - - - - - -] [instance: 4414e066-bc1a-4a63-b3a0-5e88f0553032] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:51:13 compute-0 sshd-session[253253]: Connection closed by invalid user hadoop 80.94.92.164 port 36846 [preauth]
Nov 22 08:51:14 compute-0 nova_compute[189268]: 2025-11-22 08:51:14.700 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:14 compute-0 NetworkManager[56326]: <info>  [1763801474.7019] manager: (patch-provnet-4626db62-a226-41d4-b94f-04168db037c0-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Nov 22 08:51:14 compute-0 NetworkManager[56326]: <info>  [1763801474.7046] manager: (patch-br-int-to-provnet-4626db62-a226-41d4-b94f-04168db037c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Nov 22 08:51:14 compute-0 nova_compute[189268]: 2025-11-22 08:51:14.828 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:14 compute-0 ovn_controller[97783]: 2025-11-22T08:51:14Z|00141|binding|INFO|Releasing lport 7ba31b4f-cb70-4305-a919-49ac9f8bddd1 from this chassis (sb_readonly=0)
Nov 22 08:51:14 compute-0 nova_compute[189268]: 2025-11-22 08:51:14.847 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:15 compute-0 nova_compute[189268]: 2025-11-22 08:51:15.570 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:15 compute-0 nova_compute[189268]: 2025-11-22 08:51:15.716 189273 DEBUG nova.compute.manager [req-bf08156a-5113-4e9c-87af-17a64620237b req-f0b5d3fc-0a2d-4bc3-8f77-87b0aa29e976 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Received event network-changed-1a2be7e7-4a90-44c8-bdf7-adac66f1e84d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:51:15 compute-0 nova_compute[189268]: 2025-11-22 08:51:15.717 189273 DEBUG nova.compute.manager [req-bf08156a-5113-4e9c-87af-17a64620237b req-f0b5d3fc-0a2d-4bc3-8f77-87b0aa29e976 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Refreshing instance network info cache due to event network-changed-1a2be7e7-4a90-44c8-bdf7-adac66f1e84d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:51:15 compute-0 nova_compute[189268]: 2025-11-22 08:51:15.717 189273 DEBUG oslo_concurrency.lockutils [req-bf08156a-5113-4e9c-87af-17a64620237b req-f0b5d3fc-0a2d-4bc3-8f77-87b0aa29e976 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-38817707-1f5a-4596-bfd2-b48048331de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:51:15 compute-0 nova_compute[189268]: 2025-11-22 08:51:15.718 189273 DEBUG oslo_concurrency.lockutils [req-bf08156a-5113-4e9c-87af-17a64620237b req-f0b5d3fc-0a2d-4bc3-8f77-87b0aa29e976 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-38817707-1f5a-4596-bfd2-b48048331de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:51:15 compute-0 nova_compute[189268]: 2025-11-22 08:51:15.718 189273 DEBUG nova.network.neutron [req-bf08156a-5113-4e9c-87af-17a64620237b req-f0b5d3fc-0a2d-4bc3-8f77-87b0aa29e976 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Refreshing network info cache for port 1a2be7e7-4a90-44c8-bdf7-adac66f1e84d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:51:17 compute-0 nova_compute[189268]: 2025-11-22 08:51:17.404 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:17 compute-0 nova_compute[189268]: 2025-11-22 08:51:17.887 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Acquiring lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:51:17 compute-0 nova_compute[189268]: 2025-11-22 08:51:17.888 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:51:17 compute-0 nova_compute[189268]: 2025-11-22 08:51:17.909 189273 DEBUG nova.compute.manager [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 08:51:17 compute-0 nova_compute[189268]: 2025-11-22 08:51:17.981 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:51:17 compute-0 nova_compute[189268]: 2025-11-22 08:51:17.982 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:51:17 compute-0 nova_compute[189268]: 2025-11-22 08:51:17.991 189273 DEBUG nova.virt.hardware [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 08:51:17 compute-0 nova_compute[189268]: 2025-11-22 08:51:17.992 189273 INFO nova.compute.claims [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Claim successful on node compute-0.ctlplane.example.com
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.148 189273 DEBUG nova.network.neutron [req-bf08156a-5113-4e9c-87af-17a64620237b req-f0b5d3fc-0a2d-4bc3-8f77-87b0aa29e976 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Updated VIF entry in instance network info cache for port 1a2be7e7-4a90-44c8-bdf7-adac66f1e84d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.151 189273 DEBUG nova.network.neutron [req-bf08156a-5113-4e9c-87af-17a64620237b req-f0b5d3fc-0a2d-4bc3-8f77-87b0aa29e976 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Updating instance_info_cache with network_info: [{"id": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "address": "fa:16:3e:7a:15:7f", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a2be7e7-4a", "ovs_interfaceid": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.154 189273 DEBUG nova.compute.provider_tree [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.176 189273 DEBUG oslo_concurrency.lockutils [req-bf08156a-5113-4e9c-87af-17a64620237b req-f0b5d3fc-0a2d-4bc3-8f77-87b0aa29e976 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-38817707-1f5a-4596-bfd2-b48048331de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.178 189273 DEBUG nova.scheduler.client.report [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.197 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.198 189273 DEBUG nova.compute.manager [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.249 189273 DEBUG nova.compute.manager [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.251 189273 DEBUG nova.network.neutron [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.279 189273 INFO nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.303 189273 DEBUG nova.compute.manager [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.425 189273 DEBUG nova.compute.manager [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.435 189273 DEBUG nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.436 189273 INFO nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Creating image(s)
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.437 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Acquiring lock "/var/lib/nova/instances/ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.438 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lock "/var/lib/nova/instances/ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.438 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lock "/var/lib/nova/instances/ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.452 189273 DEBUG oslo_concurrency.processutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.513 189273 DEBUG oslo_concurrency.processutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.515 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Acquiring lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.516 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.528 189273 DEBUG oslo_concurrency.processutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.587 189273 DEBUG oslo_concurrency.processutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.589 189273 DEBUG oslo_concurrency.processutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad,backing_fmt=raw /var/lib/nova/instances/ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.637 189273 DEBUG oslo_concurrency.processutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad,backing_fmt=raw /var/lib/nova/instances/ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1/disk 1073741824" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.638 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.639 189273 DEBUG oslo_concurrency.processutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.698 189273 DEBUG oslo_concurrency.processutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.700 189273 DEBUG nova.virt.disk.api [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Checking if we can resize image /var/lib/nova/instances/ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.701 189273 DEBUG oslo_concurrency.processutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.737 189273 DEBUG nova.policy [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0a25c34d06a84df687860465cf2eada0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '09d51c6de735419ea20d768f11d957d9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.758 189273 DEBUG oslo_concurrency.processutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.759 189273 DEBUG nova.virt.disk.api [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Cannot resize image /var/lib/nova/instances/ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.759 189273 DEBUG nova.objects.instance [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lazy-loading 'migration_context' on Instance uuid ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.783 189273 DEBUG nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.784 189273 DEBUG nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Ensure instance console log exists: /var/lib/nova/instances/ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.785 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.785 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:51:18 compute-0 nova_compute[189268]: 2025-11-22 08:51:18.786 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:51:19 compute-0 nova_compute[189268]: 2025-11-22 08:51:19.933 189273 DEBUG nova.network.neutron [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Successfully created port: fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 08:51:20 compute-0 podman[253273]: 2025-11-22 08:51:20.13231959 +0000 UTC m=+0.078238895 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 08:51:20 compute-0 podman[253272]: 2025-11-22 08:51:20.147681373 +0000 UTC m=+0.097809391 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 22 08:51:20 compute-0 nova_compute[189268]: 2025-11-22 08:51:20.574 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:21 compute-0 nova_compute[189268]: 2025-11-22 08:51:21.598 189273 DEBUG nova.network.neutron [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Successfully updated port: fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 08:51:21 compute-0 nova_compute[189268]: 2025-11-22 08:51:21.621 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Acquiring lock "refresh_cache-ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:51:21 compute-0 nova_compute[189268]: 2025-11-22 08:51:21.623 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Acquired lock "refresh_cache-ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:51:21 compute-0 nova_compute[189268]: 2025-11-22 08:51:21.624 189273 DEBUG nova.network.neutron [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 08:51:21 compute-0 nova_compute[189268]: 2025-11-22 08:51:21.874 189273 DEBUG nova.compute.manager [req-cf4e4403-48ed-4325-9705-8d15e53e906f req-fb5c39f1-4b28-4c79-8f26-1abb3d96054a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Received event network-changed-fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:51:21 compute-0 nova_compute[189268]: 2025-11-22 08:51:21.875 189273 DEBUG nova.compute.manager [req-cf4e4403-48ed-4325-9705-8d15e53e906f req-fb5c39f1-4b28-4c79-8f26-1abb3d96054a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Refreshing instance network info cache due to event network-changed-fb3c52db-5aeb-4b04-b9b9-fc119e8654c7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:51:21 compute-0 nova_compute[189268]: 2025-11-22 08:51:21.875 189273 DEBUG oslo_concurrency.lockutils [req-cf4e4403-48ed-4325-9705-8d15e53e906f req-fb5c39f1-4b28-4c79-8f26-1abb3d96054a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:51:21 compute-0 nova_compute[189268]: 2025-11-22 08:51:21.936 189273 DEBUG nova.network.neutron [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 08:51:22 compute-0 nova_compute[189268]: 2025-11-22 08:51:22.406 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:23 compute-0 podman[253312]: 2025-11-22 08:51:23.161187941 +0000 UTC m=+0.118464225 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., container_name=kepler, config_id=edpm, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, distribution-scope=public, release=1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 22 08:51:23 compute-0 podman[253313]: 2025-11-22 08:51:23.20464032 +0000 UTC m=+0.159281003 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.719 189273 DEBUG nova.network.neutron [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Updating instance_info_cache with network_info: [{"id": "fb3c52db-5aeb-4b04-b9b9-fc119e8654c7", "address": "fa:16:3e:61:af:f2", "network": {"id": "31c28528-025f-461b-8564-53ea033211c5", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2115214265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09d51c6de735419ea20d768f11d957d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb3c52db-5a", "ovs_interfaceid": "fb3c52db-5aeb-4b04-b9b9-fc119e8654c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.753 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Releasing lock "refresh_cache-ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.753 189273 DEBUG nova.compute.manager [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Instance network_info: |[{"id": "fb3c52db-5aeb-4b04-b9b9-fc119e8654c7", "address": "fa:16:3e:61:af:f2", "network": {"id": "31c28528-025f-461b-8564-53ea033211c5", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2115214265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09d51c6de735419ea20d768f11d957d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb3c52db-5a", "ovs_interfaceid": "fb3c52db-5aeb-4b04-b9b9-fc119e8654c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.754 189273 DEBUG oslo_concurrency.lockutils [req-cf4e4403-48ed-4325-9705-8d15e53e906f req-fb5c39f1-4b28-4c79-8f26-1abb3d96054a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.755 189273 DEBUG nova.network.neutron [req-cf4e4403-48ed-4325-9705-8d15e53e906f req-fb5c39f1-4b28-4c79-8f26-1abb3d96054a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Refreshing network info cache for port fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.757 189273 DEBUG nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Start _get_guest_xml network_info=[{"id": "fb3c52db-5aeb-4b04-b9b9-fc119e8654c7", "address": "fa:16:3e:61:af:f2", "network": {"id": "31c28528-025f-461b-8564-53ea033211c5", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2115214265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09d51c6de735419ea20d768f11d957d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb3c52db-5a", "ovs_interfaceid": "fb3c52db-5aeb-4b04-b9b9-fc119e8654c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T08:46:32Z,direct_url=<?>,disk_format='qcow2',id=ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T08:46:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.764 189273 WARNING nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.769 189273 DEBUG nova.virt.libvirt.host [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.771 189273 DEBUG nova.virt.libvirt.host [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.779 189273 DEBUG nova.virt.libvirt.host [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.779 189273 DEBUG nova.virt.libvirt.host [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
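The pair of probes above first looks for a cgroup-v1 cpu controller (missing on this host) and then finds one on the unified cgroup-v2 hierarchy. On a v2 host the check reduces to roughly the following; the exact path is an assumption based on the standard unified-hierarchy layout, not lifted from Nova's source:

    # cgroup v2 lists the enabled controllers in one file at the hierarchy root
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        has_cpu_controller = 'cpu' in f.read().split()
    print('cgroup v2 cpu controller present:', has_cpu_controller)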
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.780 189273 DEBUG nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.781 189273 DEBUG nova.virt.hardware [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T08:46:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='60cc47c3-347f-4964-bb52-9bef8d0548a9',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T08:46:32Z,direct_url=<?>,disk_format='qcow2',id=ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T08:46:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.781 189273 DEBUG nova.virt.hardware [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.782 189273 DEBUG nova.virt.hardware [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.782 189273 DEBUG nova.virt.hardware [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.783 189273 DEBUG nova.virt.hardware [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.783 189273 DEBUG nova.virt.hardware [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.784 189273 DEBUG nova.virt.hardware [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.784 189273 DEBUG nova.virt.hardware [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.784 189273 DEBUG nova.virt.hardware [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.785 189273 DEBUG nova.virt.hardware [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.785 189273 DEBUG nova.virt.hardware [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
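The topology walk above starts from unset flavor/image constraints (logged as 0:0:0), caps each dimension at 65536, and ends with the only topology whose product equals one vCPU. A small enumeration sketch that reproduces the logged result; it illustrates the selection, it is not Nova's implementation:

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # yield every (sockets, cores, threads) whose product equals the vCPU count
        for s, c, t in product(range(1, min(vcpus, max_sockets) + 1),
                               range(1, min(vcpus, max_cores) + 1),
                               range(1, min(vcpus, max_threads) + 1)):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"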
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.789 189273 DEBUG nova.virt.libvirt.vif [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:51:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1242984221',display_name='tempest-ServerAddressesTestJSON-server-1242984221',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1242984221',id=13,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09d51c6de735419ea20d768f11d957d9',ramdisk_id='',reservation_id='r-1ogszjgk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-44581399',owner_user_name='tempest-ServerAddressesTestJSON-44581399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:51:18Z,user_data=None,user_id='0a25c34d06a84df687860465cf2eada0',uuid=ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fb3c52db-5aeb-4b04-b9b9-fc119e8654c7", "address": "fa:16:3e:61:af:f2", "network": {"id": "31c28528-025f-461b-8564-53ea033211c5", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2115214265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09d51c6de735419ea20d768f11d957d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb3c52db-5a", "ovs_interfaceid": "fb3c52db-5aeb-4b04-b9b9-fc119e8654c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.789 189273 DEBUG nova.network.os_vif_util [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Converting VIF {"id": "fb3c52db-5aeb-4b04-b9b9-fc119e8654c7", "address": "fa:16:3e:61:af:f2", "network": {"id": "31c28528-025f-461b-8564-53ea033211c5", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2115214265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09d51c6de735419ea20d768f11d957d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb3c52db-5a", "ovs_interfaceid": "fb3c52db-5aeb-4b04-b9b9-fc119e8654c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.790 189273 DEBUG nova.network.os_vif_util [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:af:f2,bridge_name='br-int',has_traffic_filtering=True,id=fb3c52db-5aeb-4b04-b9b9-fc119e8654c7,network=Network(31c28528-025f-461b-8564-53ea033211c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb3c52db-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.791 189273 DEBUG nova.objects.instance [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lazy-loading 'pci_devices' on Instance uuid ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.808 189273 DEBUG nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] End _get_guest_xml xml=<domain type="kvm">
Nov 22 08:51:23 compute-0 nova_compute[189268]:   <uuid>ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1</uuid>
Nov 22 08:51:23 compute-0 nova_compute[189268]:   <name>instance-0000000d</name>
Nov 22 08:51:23 compute-0 nova_compute[189268]:   <memory>131072</memory>
Nov 22 08:51:23 compute-0 nova_compute[189268]:   <vcpu>1</vcpu>
Nov 22 08:51:23 compute-0 nova_compute[189268]:   <metadata>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <nova:name>tempest-ServerAddressesTestJSON-server-1242984221</nova:name>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <nova:creationTime>2025-11-22 08:51:23</nova:creationTime>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <nova:flavor name="m1.nano">
Nov 22 08:51:23 compute-0 nova_compute[189268]:         <nova:memory>128</nova:memory>
Nov 22 08:51:23 compute-0 nova_compute[189268]:         <nova:disk>1</nova:disk>
Nov 22 08:51:23 compute-0 nova_compute[189268]:         <nova:swap>0</nova:swap>
Nov 22 08:51:23 compute-0 nova_compute[189268]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 08:51:23 compute-0 nova_compute[189268]:         <nova:vcpus>1</nova:vcpus>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       </nova:flavor>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <nova:owner>
Nov 22 08:51:23 compute-0 nova_compute[189268]:         <nova:user uuid="0a25c34d06a84df687860465cf2eada0">tempest-ServerAddressesTestJSON-44581399-project-member</nova:user>
Nov 22 08:51:23 compute-0 nova_compute[189268]:         <nova:project uuid="09d51c6de735419ea20d768f11d957d9">tempest-ServerAddressesTestJSON-44581399</nova:project>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       </nova:owner>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <nova:root type="image" uuid="ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <nova:ports>
Nov 22 08:51:23 compute-0 nova_compute[189268]:         <nova:port uuid="fb3c52db-5aeb-4b04-b9b9-fc119e8654c7">
Nov 22 08:51:23 compute-0 nova_compute[189268]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:         </nova:port>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       </nova:ports>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     </nova:instance>
Nov 22 08:51:23 compute-0 nova_compute[189268]:   </metadata>
Nov 22 08:51:23 compute-0 nova_compute[189268]:   <sysinfo type="smbios">
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <system>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <entry name="manufacturer">RDO</entry>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <entry name="product">OpenStack Compute</entry>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <entry name="serial">ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1</entry>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <entry name="uuid">ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1</entry>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <entry name="family">Virtual Machine</entry>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     </system>
Nov 22 08:51:23 compute-0 nova_compute[189268]:   </sysinfo>
Nov 22 08:51:23 compute-0 nova_compute[189268]:   <os>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <boot dev="hd"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <smbios mode="sysinfo"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:   </os>
Nov 22 08:51:23 compute-0 nova_compute[189268]:   <features>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <acpi/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <apic/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <vmcoreinfo/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:   </features>
Nov 22 08:51:23 compute-0 nova_compute[189268]:   <clock offset="utc">
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <timer name="hpet" present="no"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:   </clock>
Nov 22 08:51:23 compute-0 nova_compute[189268]:   <cpu mode="host-model" match="exact">
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:51:23 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1/disk"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <target dev="vda" bus="virtio"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <disk type="file" device="cdrom">
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <driver name="qemu" type="raw" cache="none"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1/disk.config"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <target dev="sda" bus="sata"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <interface type="ethernet">
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <mac address="fa:16:3e:61:af:f2"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <mtu size="1442"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <target dev="tapfb3c52db-5a"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     </interface>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <serial type="pty">
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <log file="/var/lib/nova/instances/ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1/console.log" append="off"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     </serial>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <video>
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     </video>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <input type="tablet" bus="usb"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <rng model="virtio">
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <backend model="random">/dev/urandom</backend>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <controller type="usb" index="0"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     <memballoon model="virtio">
Nov 22 08:51:23 compute-0 nova_compute[189268]:       <stats period="10"/>
Nov 22 08:51:23 compute-0 nova_compute[189268]:     </memballoon>
Nov 22 08:51:23 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:51:23 compute-0 nova_compute[189268]: </domain>
Nov 22 08:51:23 compute-0 nova_compute[189268]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
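The domain XML above is what Nova hands to libvirt next. A minimal sketch of defining and starting such a domain directly with the libvirt-python bindings; the file name is hypothetical, and Nova's real call path adds flags, secrets, and error handling:

    import libvirt

    # hypothetical copy of the domain XML logged above
    with open('instance-0000000d.xml') as f:
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # persist the definition
        dom.create()               # boot it; cf. "Started Virtual Machine" further down
    finally:
        conn.close()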
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.818 189273 DEBUG nova.compute.manager [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Preparing to wait for external event network-vif-plugged-fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.818 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Acquiring lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.818 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.818 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
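The three lockutils lines above are oslo.concurrency's standard acquire/run/release trace for a named lock. A toy sketch of the same pattern; the function body is illustrative only:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1-events')
    def _create_or_get_event():
        # everything in here is serialized against other holders of this lock name
        return 'network-vif-plugged-fb3c52db-5aeb-4b04-b9b9-fc119e8654c7'

    _create_or_get_event()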
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.819 189273 DEBUG nova.virt.libvirt.vif [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:51:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1242984221',display_name='tempest-ServerAddressesTestJSON-server-1242984221',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1242984221',id=13,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09d51c6de735419ea20d768f11d957d9',ramdisk_id='',reservation_id='r-1ogszjgk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-44581399',owner_user_name='tempest-ServerAddressesTestJSON-44581399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:51:18Z,user_data=None,user_id='0a25c34d06a84df687860465cf2eada0',uuid=ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fb3c52db-5aeb-4b04-b9b9-fc119e8654c7", "address": "fa:16:3e:61:af:f2", "network": {"id": "31c28528-025f-461b-8564-53ea033211c5", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2115214265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09d51c6de735419ea20d768f11d957d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb3c52db-5a", "ovs_interfaceid": "fb3c52db-5aeb-4b04-b9b9-fc119e8654c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.819 189273 DEBUG nova.network.os_vif_util [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Converting VIF {"id": "fb3c52db-5aeb-4b04-b9b9-fc119e8654c7", "address": "fa:16:3e:61:af:f2", "network": {"id": "31c28528-025f-461b-8564-53ea033211c5", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2115214265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09d51c6de735419ea20d768f11d957d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb3c52db-5a", "ovs_interfaceid": "fb3c52db-5aeb-4b04-b9b9-fc119e8654c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.820 189273 DEBUG nova.network.os_vif_util [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:af:f2,bridge_name='br-int',has_traffic_filtering=True,id=fb3c52db-5aeb-4b04-b9b9-fc119e8654c7,network=Network(31c28528-025f-461b-8564-53ea033211c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb3c52db-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.820 189273 DEBUG os_vif [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:af:f2,bridge_name='br-int',has_traffic_filtering=True,id=fb3c52db-5aeb-4b04-b9b9-fc119e8654c7,network=Network(31c28528-025f-461b-8564-53ea033211c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb3c52db-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
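The object being plugged is an os-vif versioned object; its field names match the repr in the "Converted object" lines above. A standalone sketch of the same entry point, with a pared-down Network object added because the plugin needs one in practice; it only succeeds against a live Open vSwitch, so treat it as an outline:

    import os_vif
    from os_vif.objects import instance_info, network, vif as vif_obj

    os_vif.initialize()

    # field values copied from the objects repr'd in this log
    vif = vif_obj.VIFOpenVSwitch(
        id='fb3c52db-5aeb-4b04-b9b9-fc119e8654c7',
        address='fa:16:3e:61:af:f2',
        vif_name='tapfb3c52db-5a',
        bridge_name='br-int',
        plugin='ovs',
        network=network.Network(id='31c28528-025f-461b-8564-53ea033211c5',
                                bridge='br-int', mtu=1442))
    info = instance_info.InstanceInfo(uuid='ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1',
                                      name='instance-0000000d')
    os_vif.plug(vif, info)  # requires a running Open vSwitch to succeed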
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.821 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.823 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.824 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.826 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.827 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb3c52db-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.828 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfb3c52db-5a, col_values=(('external_ids', {'iface-id': 'fb3c52db-5aeb-4b04-b9b9-fc119e8654c7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:af:f2', 'vm-uuid': 'ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.829 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:23 compute-0 NetworkManager[56326]: <info>  [1763801483.8323] manager: (tapfb3c52db-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.833 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.837 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.838 189273 INFO os_vif [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:af:f2,bridge_name='br-int',has_traffic_filtering=True,id=fb3c52db-5aeb-4b04-b9b9-fc119e8654c7,network=Network(31c28528-025f-461b-8564-53ea033211c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb3c52db-5a')
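The AddBridgeCommand/AddPortCommand/DbSetCommand entries above map one-to-one onto ovsdbapp's Open_vSwitch schema API. A sketch issuing the same three commands; the socket path and timeout are assumptions, and the log actually ran them as two separate transactions rather than one:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapfb3c52db-5a', may_exist=True))
        txn.add(api.db_set('Interface', 'tapfb3c52db-5a',
                           ('external_ids', {'iface-id': 'fb3c52db-5aeb-4b04-b9b9-fc119e8654c7',
                                             'iface-status': 'active',
                                             'attached-mac': 'fa:16:3e:61:af:f2',
                                             'vm-uuid': 'ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1'})))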
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.888 189273 DEBUG nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.889 189273 DEBUG nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.890 189273 DEBUG nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] No VIF found with MAC fa:16:3e:61:af:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 08:51:23 compute-0 nova_compute[189268]: 2025-11-22 08:51:23.891 189273 INFO nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Using config drive
Nov 22 08:51:24 compute-0 nova_compute[189268]: 2025-11-22 08:51:24.511 189273 INFO nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Creating config drive at /var/lib/nova/instances/ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1/disk.config
Nov 22 08:51:24 compute-0 nova_compute[189268]: 2025-11-22 08:51:24.522 189273 DEBUG oslo_concurrency.processutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa9h67qnh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:51:24 compute-0 nova_compute[189268]: 2025-11-22 08:51:24.658 189273 DEBUG oslo_concurrency.processutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa9h67qnh" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
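The "Running cmd"/"returned" pair above is oslo.concurrency's subprocess trace. A sketch of the same mkisofs invocation through processutils, argument vector copied from the log (the /tmp path is the transient staging directory Nova created):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs', '-o',
        '/var/lib/nova/instances/ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpa9h67qnh')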
Nov 22 08:51:24 compute-0 kernel: tapfb3c52db-5a: entered promiscuous mode
Nov 22 08:51:24 compute-0 nova_compute[189268]: 2025-11-22 08:51:24.732 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:24 compute-0 ovn_controller[97783]: 2025-11-22T08:51:24Z|00142|binding|INFO|Claiming lport fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 for this chassis.
Nov 22 08:51:24 compute-0 ovn_controller[97783]: 2025-11-22T08:51:24Z|00143|binding|INFO|fb3c52db-5aeb-4b04-b9b9-fc119e8654c7: Claiming fa:16:3e:61:af:f2 10.100.0.8
Nov 22 08:51:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:24.741 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:af:f2 10.100.0.8'], port_security=['fa:16:3e:61:af:f2 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31c28528-025f-461b-8564-53ea033211c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09d51c6de735419ea20d768f11d957d9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6fc146f5-b66a-4ffe-b46e-dbdf8c809e55', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1f655b5-b928-4729-b932-12d31ad8ab6d, chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=fb3c52db-5aeb-4b04-b9b9-fc119e8654c7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:51:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:24.743 106642 INFO neutron.agent.ovn.metadata.agent [-] Port fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 in datapath 31c28528-025f-461b-8564-53ea033211c5 bound to our chassis
Nov 22 08:51:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:24.744 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 31c28528-025f-461b-8564-53ea033211c5
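The "Matched UPDATE: PortBindingUpdatedEvent(...)" line above comes from ovsdbapp's row-event machinery: the agent registers an event on Port_Binding updates and reacts when a port lands on its chassis. A skeletal sketch of that pattern; the class name mirrors the log, but the run() body is illustrative rather than Neutron's:

    from ovsdbapp.backend.ovs_idl import event

    class PortBindingUpdatedEvent(event.RowEvent):
        def __init__(self):
            # fire on any update to the Port_Binding table
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event_type, row, old):
            # e.g. provision metadata once the row shows a chassis binding
            print('port %s bound in datapath %s' % (row.logical_port, row.datapath))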
Nov 22 08:51:24 compute-0 ovn_controller[97783]: 2025-11-22T08:51:24Z|00144|binding|INFO|Setting lport fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 ovn-installed in OVS
Nov 22 08:51:24 compute-0 ovn_controller[97783]: 2025-11-22T08:51:24Z|00145|binding|INFO|Setting lport fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 up in Southbound
Nov 22 08:51:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:24.756 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[0f562fdc-9576-45c8-b64b-aafe0e77d1ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:24 compute-0 NetworkManager[56326]: <info>  [1763801484.7583] manager: (tapfb3c52db-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Nov 22 08:51:24 compute-0 nova_compute[189268]: 2025-11-22 08:51:24.758 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:24.758 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap31c28528-01 in ovnmeta-31c28528-025f-461b-8564-53ea033211c5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
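The VETH step above runs through Neutron's privsep-wrapped ip_lib helpers (the surrounding privsep replies are its round-trips). A rough standalone equivalent with pyroute2, names taken from the log, idempotence and cleanup omitted:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-31c28528-025f-461b-8564-53ea033211c5'
    netns.create(ns)  # the agent reuses an existing namespace; this sketch does not
    ip = IPRoute()
    ip.link('add', ifname='tap31c28528-00', kind='veth', peer='tap31c28528-01')
    idx = ip.link_lookup(ifname='tap31c28528-01')[0]
    ip.link('set', index=idx, net_ns_fd=ns)  # move the inner end into the namespace
    ip.close()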
Nov 22 08:51:24 compute-0 nova_compute[189268]: 2025-11-22 08:51:24.761 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:24.761 239666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap31c28528-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 08:51:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:24.761 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[75e39873-0408-4541-989b-34a2ba777559]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:24.766 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb3f012-0f97-4b3c-997e-2f2b97d80139]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:24.779 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[0da7c51b-5d06-480b-b7a0-56defd35106c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:24 compute-0 systemd-udevd[253378]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:51:24 compute-0 systemd-machined[155703]: New machine qemu-14-instance-0000000d.
Nov 22 08:51:24 compute-0 NetworkManager[56326]: <info>  [1763801484.7995] device (tapfb3c52db-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 08:51:24 compute-0 NetworkManager[56326]: <info>  [1763801484.8003] device (tapfb3c52db-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 08:51:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:24.806 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[56295a07-fab5-4efb-8596-c746e628d25d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:24 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Nov 22 08:51:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:24.840 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[f6467067-87ac-442c-81e8-e47480eb27cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:24 compute-0 NetworkManager[56326]: <info>  [1763801484.8527] manager: (tap31c28528-00): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Nov 22 08:51:24 compute-0 systemd-udevd[253382]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:51:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:24.852 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[1e6049ac-6775-42b1-a1bd-b46ee766e027]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:24.890 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[626f05e8-85be-4897-a939-c833d40c783f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:24.893 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[c418e158-1770-40a2-b772-67a4a121158a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:24 compute-0 NetworkManager[56326]: <info>  [1763801484.9190] device (tap31c28528-00): carrier: link connected
Nov 22 08:51:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:24.926 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee33233-0718-425f-a54e-5d91fedcd8c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:24.943 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f60ab6-5284-4b81-85de-8b28faaf3b9c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31c28528-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:61:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659850, 'reachable_time': 31203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253410, 'error': None, 'target': 'ovnmeta-31c28528-025f-461b-8564-53ea033211c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:24.959 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[2c802e1e-92b1-4078-b48d-ae796ab1e069]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:61bd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659850, 'tstamp': 659850}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253411, 'error': None, 'target': 'ovnmeta-31c28528-025f-461b-8564-53ea033211c5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:24 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:24.975 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[81420057-528f-4877-a6cb-c68416865aff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31c28528-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:61:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659850, 'reachable_time': 31203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253412, 'error': None, 'target': 'ovnmeta-31c28528-025f-461b-8564-53ea033211c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:25.008 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[5bcc833d-960b-4579-9259-99f244d41a03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:25.086 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[d63f3278-61fd-4914-8b1f-167e5aa504f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:25.087 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31c28528-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:25.087 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:25.087 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31c28528-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:51:25 compute-0 NetworkManager[56326]: <info>  [1763801485.0900] manager: (tap31c28528-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Nov 22 08:51:25 compute-0 nova_compute[189268]: 2025-11-22 08:51:25.089 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:25 compute-0 kernel: tap31c28528-00: entered promiscuous mode
Nov 22 08:51:25 compute-0 nova_compute[189268]: 2025-11-22 08:51:25.096 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:25.098 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap31c28528-00, col_values=(('external_ids', {'iface-id': 'a78c563c-1f38-429f-8335-d8b0549c0cf2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
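The three ovsdbapp transactions above re-home the metadata tap: drop it from br-ex if it is there (that one "caused no change"), add it to br-int, and stamp the Interface with the logical-port id so ovn-controller can bind it. A rough equivalent as a Python subprocess sketch (bridge, port, and iface-id values are taken from the log; the ovs-vsctl flags are standard):

    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # Same effect as the DelPort/AddPort/DbSet commands logged above:
    run('ovs-vsctl', '--if-exists', 'del-port', 'br-ex', 'tap31c28528-00')
    run('ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap31c28528-00')
    run('ovs-vsctl', 'set', 'Interface', 'tap31c28528-00',
        'external_ids:iface-id=a78c563c-1f38-429f-8335-d8b0549c0cf2')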
Nov 22 08:51:25 compute-0 ovn_controller[97783]: 2025-11-22T08:51:25Z|00146|binding|INFO|Releasing lport a78c563c-1f38-429f-8335-d8b0549c0cf2 from this chassis (sb_readonly=0)
Nov 22 08:51:25 compute-0 nova_compute[189268]: 2025-11-22 08:51:25.103 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:25 compute-0 nova_compute[189268]: 2025-11-22 08:51:25.115 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:25.118 106642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/31c28528-025f-461b-8564-53ea033211c5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/31c28528-025f-461b-8564-53ea033211c5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
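The ENOENT above is the normal first-run path: before spawning a proxy for the network, the agent reads the haproxy pidfile and treats a missing file as "not running yet". A sketch of that check, assuming the pidfile path shown in the log:

    import os

    def get_pid_from_file(path):
        """Return the PID stored in `path`, or None if the file is absent."""
        try:
            with open(path) as f:
                return int(f.read().strip())
        except FileNotFoundError:
            return None

    pidfile = ('/var/lib/neutron/external/pids/'
               '31c28528-025f-461b-8564-53ea033211c5.pid.haproxy')
    if get_pid_from_file(pidfile) is None:
        pass  # no proxy for this network yet -> render config, spawn haproxy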
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:25.119 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[006d3ec6-7fce-42b3-852e-a2317e0c7701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:25.120 106642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]: global
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     log         /dev/log local0 debug
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     log-tag     haproxy-metadata-proxy-31c28528-025f-461b-8564-53ea033211c5
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     user        root
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     group       root
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     maxconn     1024
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     pidfile     /var/lib/neutron/external/pids/31c28528-025f-461b-8564-53ea033211c5.pid.haproxy
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     daemon
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]: defaults
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     log global
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     mode http
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     option httplog
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     option dontlognull
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     option http-server-close
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     option forwardfor
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     retries                 3
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     timeout http-request    30s
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     timeout connect         30s
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     timeout client          32s
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     timeout server          32s
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     timeout http-keep-alive 30s
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]: listen listener
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     bind 169.254.169.254:80
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:     http-request add-header X-OVN-Network-ID 31c28528-025f-461b-8564-53ea033211c5
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 08:51:25 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:25.121 106642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-31c28528-025f-461b-8564-53ea033211c5', 'env', 'PROCESS_TAG=haproxy-31c28528-025f-461b-8564-53ea033211c5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/31c28528-025f-461b-8564-53ea033211c5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
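The rendered config binds the link-local metadata address inside the ovnmeta namespace and forwards to the agent's UNIX socket at /var/lib/neutron/metadata_proxy, with the X-OVN-Network-ID header telling the agent which network each request belongs to. The spawn itself is just `ip netns exec <ns> haproxy -f <conf>` behind rootwrap; a sketch of the argv (PROCESS_TAG is the marker later used to find and stop this haproxy):

    netns = 'ovnmeta-31c28528-025f-461b-8564-53ea033211c5'
    conf = ('/var/lib/neutron/ovn-metadata-proxy/'
            '31c28528-025f-461b-8564-53ea033211c5.conf')

    # argv equivalent to the rootwrap invocation in the log; actually
    # running it requires root (network namespaces).
    cmd = [
        'ip', 'netns', 'exec', netns,
        'env', 'PROCESS_TAG=haproxy-31c28528-025f-461b-8564-53ea033211c5',
        'haproxy', '-f', conf,
    ]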
Nov 22 08:51:25 compute-0 nova_compute[189268]: 2025-11-22 08:51:25.154 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801485.1543343, ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:51:25 compute-0 nova_compute[189268]: 2025-11-22 08:51:25.155 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] VM Started (Lifecycle Event)
Nov 22 08:51:25 compute-0 nova_compute[189268]: 2025-11-22 08:51:25.179 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:51:25 compute-0 nova_compute[189268]: 2025-11-22 08:51:25.185 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801485.1544821, ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:51:25 compute-0 nova_compute[189268]: 2025-11-22 08:51:25.186 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] VM Paused (Lifecycle Event)
Nov 22 08:51:25 compute-0 nova_compute[189268]: 2025-11-22 08:51:25.203 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:51:25 compute-0 nova_compute[189268]: 2025-11-22 08:51:25.209 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:51:25 compute-0 nova_compute[189268]: 2025-11-22 08:51:25.226 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] During sync_power_state the instance has a pending task (spawning). Skip.
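The lifecycle lines above decode as follows: DB power_state 0 is NOSTATE, VM power_state 3 is PAUSED (libvirt holds the guest paused while devices are plugged), 1 is RUNNING; and while task_state is still 'spawning', the manager deliberately skips the sync rather than fight the in-flight build. A simplified sketch of that decision using nova's standard power-state codes:

    # nova.compute.power_state codes
    NOSTATE, RUNNING, PAUSED = 0, 1, 3

    def handle_lifecycle_event(db_power_state, vm_power_state, task_state):
        """Simplified sketch of the skip logic visible in the log."""
        if task_state is not None:
            # e.g. 'spawning': a task owns the instance; don't sync now.
            return 'skip'
        if db_power_state != vm_power_state:
            return 'sync'   # update the DB to match the hypervisor
        return 'noop'

    assert handle_lifecycle_event(NOSTATE, PAUSED, 'spawning') == 'skip'
    assert handle_lifecycle_event(NOSTATE, RUNNING, None) == 'sync'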
Nov 22 08:51:25 compute-0 podman[253450]: 2025-11-22 08:51:25.50219938 +0000 UTC m=+0.056503820 container create f0de5521f5f1c1bf007a9d521dc5b43da2bab6eee97dc2bb40b882f6196bef90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31c28528-025f-461b-8564-53ea033211c5, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 08:51:25 compute-0 systemd[1]: Started libpod-conmon-f0de5521f5f1c1bf007a9d521dc5b43da2bab6eee97dc2bb40b882f6196bef90.scope.
Nov 22 08:51:25 compute-0 podman[253450]: 2025-11-22 08:51:25.472287106 +0000 UTC m=+0.026591586 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 08:51:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/489cf289602a5a72d1f525c15dff6d4db4d1065c46cfc134a32b61b9fba7f3b2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 08:51:25 compute-0 podman[253450]: 2025-11-22 08:51:25.617983973 +0000 UTC m=+0.172288443 container init f0de5521f5f1c1bf007a9d521dc5b43da2bab6eee97dc2bb40b882f6196bef90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31c28528-025f-461b-8564-53ea033211c5, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 08:51:25 compute-0 podman[253450]: 2025-11-22 08:51:25.627494179 +0000 UTC m=+0.181798629 container start f0de5521f5f1c1bf007a9d521dc5b43da2bab6eee97dc2bb40b882f6196bef90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31c28528-025f-461b-8564-53ea033211c5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 08:51:25 compute-0 neutron-haproxy-ovnmeta-31c28528-025f-461b-8564-53ea033211c5[253465]: [NOTICE]   (253469) : New worker (253471) forked
Nov 22 08:51:25 compute-0 neutron-haproxy-ovnmeta-31c28528-025f-461b-8564-53ea033211c5[253465]: [NOTICE]   (253469) : Loading success.
Nov 22 08:51:25 compute-0 sshd-session[253480]: banner exchange: Connection from 118.202.54.15 port 36347: invalid format
Nov 22 08:51:27 compute-0 nova_compute[189268]: 2025-11-22 08:51:27.387 189273 DEBUG nova.network.neutron [req-cf4e4403-48ed-4325-9705-8d15e53e906f req-fb5c39f1-4b28-4c79-8f26-1abb3d96054a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Updated VIF entry in instance network info cache for port fb3c52db-5aeb-4b04-b9b9-fc119e8654c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:51:27 compute-0 nova_compute[189268]: 2025-11-22 08:51:27.389 189273 DEBUG nova.network.neutron [req-cf4e4403-48ed-4325-9705-8d15e53e906f req-fb5c39f1-4b28-4c79-8f26-1abb3d96054a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Updating instance_info_cache with network_info: [{"id": "fb3c52db-5aeb-4b04-b9b9-fc119e8654c7", "address": "fa:16:3e:61:af:f2", "network": {"id": "31c28528-025f-461b-8564-53ea033211c5", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2115214265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09d51c6de735419ea20d768f11d957d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb3c52db-5a", "ovs_interfaceid": "fb3c52db-5aeb-4b04-b9b9-fc119e8654c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:51:27 compute-0 nova_compute[189268]: 2025-11-22 08:51:27.404 189273 DEBUG oslo_concurrency.lockutils [req-cf4e4403-48ed-4325-9705-8d15e53e906f req-fb5c39f1-4b28-4c79-8f26-1abb3d96054a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:51:27 compute-0 nova_compute[189268]: 2025-11-22 08:51:27.408 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:27 compute-0 nova_compute[189268]: 2025-11-22 08:51:27.945 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:28 compute-0 podman[253481]: 2025-11-22 08:51:28.105803469 +0000 UTC m=+0.063399635 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_id=edpm, vcs-type=git, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Nov 22 08:51:28 compute-0 nova_compute[189268]: 2025-11-22 08:51:28.831 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:29 compute-0 podman[203476]: time="2025-11-22T08:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:51:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30754 "" "Go-http-client/1.1"
Nov 22 08:51:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5262 "" "Go-http-client/1.1"
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.186 189273 DEBUG nova.compute.manager [req-1fc26773-38c1-454a-bad8-bd245cf9b05e req-6ce1b90c-1c8d-4057-84f4-e6fdcc162fc3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Received event network-vif-plugged-fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.187 189273 DEBUG oslo_concurrency.lockutils [req-1fc26773-38c1-454a-bad8-bd245cf9b05e req-6ce1b90c-1c8d-4057-84f4-e6fdcc162fc3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.187 189273 DEBUG oslo_concurrency.lockutils [req-1fc26773-38c1-454a-bad8-bd245cf9b05e req-6ce1b90c-1c8d-4057-84f4-e6fdcc162fc3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.187 189273 DEBUG oslo_concurrency.lockutils [req-1fc26773-38c1-454a-bad8-bd245cf9b05e req-6ce1b90c-1c8d-4057-84f4-e6fdcc162fc3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.188 189273 DEBUG nova.compute.manager [req-1fc26773-38c1-454a-bad8-bd245cf9b05e req-6ce1b90c-1c8d-4057-84f4-e6fdcc162fc3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Processing event network-vif-plugged-fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.188 189273 DEBUG nova.compute.manager [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.193 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801490.193186, ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.194 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] VM Resumed (Lifecycle Event)
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.197 189273 DEBUG nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.203 189273 INFO nova.virt.libvirt.driver [-] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Instance spawned successfully.
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.204 189273 DEBUG nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.212 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.223 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.229 189273 DEBUG nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.230 189273 DEBUG nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.231 189273 DEBUG nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.231 189273 DEBUG nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.232 189273 DEBUG nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.233 189273 DEBUG nova.virt.libvirt.driver [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.241 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.365 189273 INFO nova.compute.manager [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Took 11.94 seconds to spawn the instance on the hypervisor.
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.366 189273 DEBUG nova.compute.manager [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.446 189273 INFO nova.compute.manager [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Took 12.49 seconds to build instance.
Nov 22 08:51:30 compute-0 nova_compute[189268]: 2025-11-22 08:51:30.485 189273 DEBUG oslo_concurrency.lockutils [None req-6434b87c-d926-4cd7-935b-0b2e80ebc9ca 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
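The build accounting nests as expected: 11.94 s to spawn on the hypervisor, 12.49 s for the whole build, 12.597 s holding the instance lock. A small sketch for extracting these figures from a journal excerpt like this one:

    import re

    TOOK = re.compile(r'Took (\d+\.\d+) seconds to (.+?)\.')

    def build_timings(lines):
        """Yield (seconds, what) for every 'Took N seconds ...' log line."""
        for line in lines:
            m = TOOK.search(line)
            if m:
                yield float(m.group(1)), m.group(2)

    # On this excerpt: (11.94, 'spawn the instance on the hypervisor'),
    #                  (12.49, 'build instance'), ...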
Nov 22 08:51:31 compute-0 podman[253502]: 2025-11-22 08:51:31.120431178 +0000 UTC m=+0.074699829 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 22 08:51:31 compute-0 openstack_network_exporter[205661]: ERROR   08:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:51:31 compute-0 openstack_network_exporter[205661]: ERROR   08:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:51:31 compute-0 openstack_network_exporter[205661]: ERROR   08:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:51:31 compute-0 openstack_network_exporter[205661]: ERROR   08:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:51:31 compute-0 openstack_network_exporter[205661]: ERROR   08:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
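These exporter errors are expected noise on a compute node: ovn-northd runs only on the control plane, and the dpif-netdev counters exist only with a userspace (DPDK) datapath, so the appctl control sockets being probed are absent here. A quick existence check for those sockets (standard OVS/OVN runtime paths; the exact globs are an assumption):

    import glob

    # ovs-appctl reaches each daemon through a per-PID control socket,
    # e.g. /var/run/openvswitch/ovs-vswitchd.<pid>.ctl
    for pattern in ('/var/run/openvswitch/ovs-vswitchd.*.ctl',
                    '/var/run/openvswitch/ovsdb-server.*.ctl',
                    '/var/run/ovn/ovn-northd.*.ctl'):
        hits = glob.glob(pattern)
        print(pattern, '->', hits or 'absent (matches the errors above)')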
Nov 22 08:51:32 compute-0 nova_compute[189268]: 2025-11-22 08:51:32.376 189273 DEBUG nova.compute.manager [req-b10dbcb3-31b5-4882-9f54-fcda2134ba2b req-c7c169ce-9e50-4572-9cdf-d7c78a3b1d13 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Received event network-vif-plugged-fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:51:32 compute-0 nova_compute[189268]: 2025-11-22 08:51:32.377 189273 DEBUG oslo_concurrency.lockutils [req-b10dbcb3-31b5-4882-9f54-fcda2134ba2b req-c7c169ce-9e50-4572-9cdf-d7c78a3b1d13 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:51:32 compute-0 nova_compute[189268]: 2025-11-22 08:51:32.378 189273 DEBUG oslo_concurrency.lockutils [req-b10dbcb3-31b5-4882-9f54-fcda2134ba2b req-c7c169ce-9e50-4572-9cdf-d7c78a3b1d13 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:51:32 compute-0 nova_compute[189268]: 2025-11-22 08:51:32.378 189273 DEBUG oslo_concurrency.lockutils [req-b10dbcb3-31b5-4882-9f54-fcda2134ba2b req-c7c169ce-9e50-4572-9cdf-d7c78a3b1d13 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:51:32 compute-0 nova_compute[189268]: 2025-11-22 08:51:32.379 189273 DEBUG nova.compute.manager [req-b10dbcb3-31b5-4882-9f54-fcda2134ba2b req-c7c169ce-9e50-4572-9cdf-d7c78a3b1d13 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] No waiting events found dispatching network-vif-plugged-fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:51:32 compute-0 nova_compute[189268]: 2025-11-22 08:51:32.379 189273 WARNING nova.compute.manager [req-b10dbcb3-31b5-4882-9f54-fcda2134ba2b req-c7c169ce-9e50-4572-9cdf-d7c78a3b1d13 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Received unexpected event network-vif-plugged-fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 for instance with vm_state active and task_state None.
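The WARNING is benign: Neutron delivered network-vif-plugged a second time after the instance had already gone active, and the waiter registered during spawn was consumed at 08:51:30, so this delivery finds nobody waiting. The underlying pattern is a per-instance map of named events, roughly (a simplified sketch, not nova's actual class):

    import threading

    class InstanceEvents:
        """Sketch of nova's prepare/pop pattern for external events."""
        def __init__(self):
            self._events = {}        # (instance, name) -> threading.Event
            self._lock = threading.Lock()

        def prepare(self, instance, name):
            with self._lock:
                ev = self._events.setdefault((instance, name),
                                             threading.Event())
            return ev                # caller blocks on ev.wait(timeout)

        def pop(self, instance, name):
            with self._lock:
                ev = self._events.pop((instance, name), None)
            if ev is None:
                return 'unexpected'  # -> the WARNING seen above
            ev.set()
            return 'dispatched'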
Nov 22 08:51:32 compute-0 nova_compute[189268]: 2025-11-22 08:51:32.413 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:33 compute-0 nova_compute[189268]: 2025-11-22 08:51:33.170 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:33 compute-0 nova_compute[189268]: 2025-11-22 08:51:33.834 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.233 189273 DEBUG oslo_concurrency.lockutils [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Acquiring lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.234 189273 DEBUG oslo_concurrency.lockutils [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.234 189273 DEBUG oslo_concurrency.lockutils [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Acquiring lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.235 189273 DEBUG oslo_concurrency.lockutils [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.235 189273 DEBUG oslo_concurrency.lockutils [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.237 189273 INFO nova.compute.manager [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Terminating instance
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.238 189273 DEBUG nova.compute.manager [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 08:51:34 compute-0 kernel: tapfb3c52db-5a (unregistering): left promiscuous mode
Nov 22 08:51:34 compute-0 NetworkManager[56326]: <info>  [1763801494.2670] device (tapfb3c52db-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.277 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:34 compute-0 ovn_controller[97783]: 2025-11-22T08:51:34Z|00147|binding|INFO|Releasing lport fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 from this chassis (sb_readonly=0)
Nov 22 08:51:34 compute-0 ovn_controller[97783]: 2025-11-22T08:51:34Z|00148|binding|INFO|Setting lport fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 down in Southbound
Nov 22 08:51:34 compute-0 ovn_controller[97783]: 2025-11-22T08:51:34Z|00149|binding|INFO|Removing iface tapfb3c52db-5a ovn-installed in OVS
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.281 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:34 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:34.285 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:af:f2 10.100.0.8'], port_security=['fa:16:3e:61:af:f2 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31c28528-025f-461b-8564-53ea033211c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09d51c6de735419ea20d768f11d957d9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6fc146f5-b66a-4ffe-b46e-dbdf8c809e55', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1f655b5-b928-4729-b932-12d31ad8ab6d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=fb3c52db-5aeb-4b04-b9b9-fc119e8654c7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:51:34 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:34.287 106642 INFO neutron.agent.ovn.metadata.agent [-] Port fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 in datapath 31c28528-025f-461b-8564-53ea033211c5 unbound from our chassis
Nov 22 08:51:34 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:34.289 106642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 31c28528-025f-461b-8564-53ea033211c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 08:51:34 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:34.290 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[31b1d727-fd25-4c8a-af4f-b433ab9e988a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:34 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:34.291 106642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-31c28528-025f-461b-8564-53ea033211c5 namespace which is not needed anymore
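The teardown above is event-driven end to end: the agent subscribes to Port_Binding updates in the OVN Southbound DB, and the port's chassis column going empty is what triggers the unbind and namespace cleanup. Declaring such a watcher with ovsdbapp looks roughly like this (a sketch; the agent's real event classes live in neutron.agent.ovn.metadata.agent):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fire on Port_Binding row updates (sketch)."""
        def __init__(self, agent):
            self.agent = agent
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # `old` carries previous values of changed columns; an empty
            # chassis means the port left this host -> tear down/reprovision.
            if hasattr(old, 'chassis') and not row.chassis:
                self.agent.teardown_datapath(row.datapath)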
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.293 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:34 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 22 08:51:34 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 4.593s CPU time.
Nov 22 08:51:34 compute-0 systemd-machined[155703]: Machine qemu-14-instance-0000000d terminated.
Nov 22 08:51:34 compute-0 neutron-haproxy-ovnmeta-31c28528-025f-461b-8564-53ea033211c5[253465]: [NOTICE]   (253469) : haproxy version is 2.8.14-c23fe91
Nov 22 08:51:34 compute-0 neutron-haproxy-ovnmeta-31c28528-025f-461b-8564-53ea033211c5[253465]: [NOTICE]   (253469) : path to executable is /usr/sbin/haproxy
Nov 22 08:51:34 compute-0 neutron-haproxy-ovnmeta-31c28528-025f-461b-8564-53ea033211c5[253465]: [WARNING]  (253469) : Exiting Master process...
Nov 22 08:51:34 compute-0 neutron-haproxy-ovnmeta-31c28528-025f-461b-8564-53ea033211c5[253465]: [ALERT]    (253469) : Current worker (253471) exited with code 143 (Terminated)
Nov 22 08:51:34 compute-0 neutron-haproxy-ovnmeta-31c28528-025f-461b-8564-53ea033211c5[253465]: [WARNING]  (253469) : All workers exited. Exiting... (0)
Nov 22 08:51:34 compute-0 systemd[1]: libpod-f0de5521f5f1c1bf007a9d521dc5b43da2bab6eee97dc2bb40b882f6196bef90.scope: Deactivated successfully.
Nov 22 08:51:34 compute-0 conmon[253465]: conmon f0de5521f5f1c1bf007a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f0de5521f5f1c1bf007a9d521dc5b43da2bab6eee97dc2bb40b882f6196bef90.scope/container/memory.events
Nov 22 08:51:34 compute-0 podman[253549]: 2025-11-22 08:51:34.467860625 +0000 UTC m=+0.061623098 container died f0de5521f5f1c1bf007a9d521dc5b43da2bab6eee97dc2bb40b882f6196bef90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31c28528-025f-461b-8564-53ea033211c5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:51:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f0de5521f5f1c1bf007a9d521dc5b43da2bab6eee97dc2bb40b882f6196bef90-userdata-shm.mount: Deactivated successfully.
Nov 22 08:51:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-489cf289602a5a72d1f525c15dff6d4db4d1065c46cfc134a32b61b9fba7f3b2-merged.mount: Deactivated successfully.
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.522 189273 INFO nova.virt.libvirt.driver [-] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Instance destroyed successfully.
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.524 189273 DEBUG nova.objects.instance [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lazy-loading 'resources' on Instance uuid ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:51:34 compute-0 podman[253549]: 2025-11-22 08:51:34.527553919 +0000 UTC m=+0.121316392 container cleanup f0de5521f5f1c1bf007a9d521dc5b43da2bab6eee97dc2bb40b882f6196bef90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31c28528-025f-461b-8564-53ea033211c5, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.536 189273 DEBUG nova.virt.libvirt.vif [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T08:51:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1242984221',display_name='tempest-ServerAddressesTestJSON-server-1242984221',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1242984221',id=13,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T08:51:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09d51c6de735419ea20d768f11d957d9',ramdisk_id='',reservation_id='r-1ogszjgk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-44581399',owner_user_name='tempest-ServerAddressesTestJSON-44581399-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T08:51:30Z,user_data=None,user_id='0a25c34d06a84df687860465cf2eada0',uuid=ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb3c52db-5aeb-4b04-b9b9-fc119e8654c7", "address": "fa:16:3e:61:af:f2", "network": {"id": "31c28528-025f-461b-8564-53ea033211c5", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2115214265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09d51c6de735419ea20d768f11d957d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb3c52db-5a", "ovs_interfaceid": "fb3c52db-5aeb-4b04-b9b9-fc119e8654c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.537 189273 DEBUG nova.network.os_vif_util [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Converting VIF {"id": "fb3c52db-5aeb-4b04-b9b9-fc119e8654c7", "address": "fa:16:3e:61:af:f2", "network": {"id": "31c28528-025f-461b-8564-53ea033211c5", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2115214265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09d51c6de735419ea20d768f11d957d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb3c52db-5a", "ovs_interfaceid": "fb3c52db-5aeb-4b04-b9b9-fc119e8654c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.538 189273 DEBUG nova.network.os_vif_util [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:af:f2,bridge_name='br-int',has_traffic_filtering=True,id=fb3c52db-5aeb-4b04-b9b9-fc119e8654c7,network=Network(31c28528-025f-461b-8564-53ea033211c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb3c52db-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.538 189273 DEBUG os_vif [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:af:f2,bridge_name='br-int',has_traffic_filtering=True,id=fb3c52db-5aeb-4b04-b9b9-fc119e8654c7,network=Network(31c28528-025f-461b-8564-53ea033211c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb3c52db-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
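Unplugging goes through the public os-vif entry points: nova converts its VIF dict to a VIFOpenVSwitch object (previous two lines) and hands it to os_vif.unplug(), which delegates to the 'ovs' plugin. A sketch of that call path with illustrative objects (a real call needs the full network and profile fields nova fills in, plus privileges to reach OVS):

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # loads the plug/unplug plugins once per process

    inst = instance_info.InstanceInfo(
        uuid='ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1',
        name='instance-0000000d')
    ovs_vif = vif.VIFOpenVSwitch(
        id='fb3c52db-5aeb-4b04-b9b9-fc119e8654c7',
        address='fa:16:3e:61:af:f2',
        vif_name='tapfb3c52db-5a',
        bridge_name='br-int',
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='fb3c52db-5aeb-4b04-b9b9-fc119e8654c7'))

    # Removes the port from br-int, mirroring the DelPortCommand below.
    os_vif.unplug(ovs_vif, inst)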
Nov 22 08:51:34 compute-0 systemd[1]: libpod-conmon-f0de5521f5f1c1bf007a9d521dc5b43da2bab6eee97dc2bb40b882f6196bef90.scope: Deactivated successfully.
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.540 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.540 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb3c52db-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.542 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.545 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.550 189273 INFO os_vif [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:af:f2,bridge_name='br-int',has_traffic_filtering=True,id=fb3c52db-5aeb-4b04-b9b9-fc119e8654c7,network=Network(31c28528-025f-461b-8564-53ea033211c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb3c52db-5a')
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.550 189273 INFO nova.virt.libvirt.driver [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Deleting instance files /var/lib/nova/instances/ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1_del
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.551 189273 INFO nova.virt.libvirt.driver [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Deletion of /var/lib/nova/instances/ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1_del complete
Nov 22 08:51:34 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:34.573 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:cf:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'd6:f7:8f:a1:cd:35'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.574 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.623 189273 INFO nova.compute.manager [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Took 0.38 seconds to destroy the instance on the hypervisor.
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.623 189273 DEBUG oslo.service.loopingcall [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.624 189273 DEBUG nova.compute.manager [-] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.624 189273 DEBUG nova.network.neutron [-] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 08:51:34 compute-0 podman[253594]: 2025-11-22 08:51:34.631002071 +0000 UTC m=+0.071498583 container remove f0de5521f5f1c1bf007a9d521dc5b43da2bab6eee97dc2bb40b882f6196bef90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31c28528-025f-461b-8564-53ea033211c5, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:51:34 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:34.638 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[35ebf1a7-5596-425e-bebb-6fbfb084b465]: (4, ('Sat Nov 22 08:51:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-31c28528-025f-461b-8564-53ea033211c5 (f0de5521f5f1c1bf007a9d521dc5b43da2bab6eee97dc2bb40b882f6196bef90)\nf0de5521f5f1c1bf007a9d521dc5b43da2bab6eee97dc2bb40b882f6196bef90\nSat Nov 22 08:51:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-31c28528-025f-461b-8564-53ea033211c5 (f0de5521f5f1c1bf007a9d521dc5b43da2bab6eee97dc2bb40b882f6196bef90)\nf0de5521f5f1c1bf007a9d521dc5b43da2bab6eee97dc2bb40b882f6196bef90\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:34 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:34.640 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb38cb2-ba91-4ad9-bd9c-b4e9a8840801]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:34 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:34.642 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31c28528-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:51:34 compute-0 kernel: tap31c28528-00: left promiscuous mode
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.648 189273 DEBUG nova.compute.manager [req-89c08411-fe5b-47a6-9c8a-97ea233bae10 req-7d74c750-427a-4619-a4a4-341e774dcf10 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Received event network-vif-unplugged-fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.649 189273 DEBUG oslo_concurrency.lockutils [req-89c08411-fe5b-47a6-9c8a-97ea233bae10 req-7d74c750-427a-4619-a4a4-341e774dcf10 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.649 189273 DEBUG oslo_concurrency.lockutils [req-89c08411-fe5b-47a6-9c8a-97ea233bae10 req-7d74c750-427a-4619-a4a4-341e774dcf10 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.649 189273 DEBUG oslo_concurrency.lockutils [req-89c08411-fe5b-47a6-9c8a-97ea233bae10 req-7d74c750-427a-4619-a4a4-341e774dcf10 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.649 189273 DEBUG nova.compute.manager [req-89c08411-fe5b-47a6-9c8a-97ea233bae10 req-7d74c750-427a-4619-a4a4-341e774dcf10 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] No waiting events found dispatching network-vif-unplugged-fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.650 189273 DEBUG nova.compute.manager [req-89c08411-fe5b-47a6-9c8a-97ea233bae10 req-7d74c750-427a-4619-a4a4-341e774dcf10 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Received event network-vif-unplugged-fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.650 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:34 compute-0 nova_compute[189268]: 2025-11-22 08:51:34.661 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:34 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:34.667 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[24d8e1b4-388c-4d8b-b173-ab612d8cb899]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:34 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:34.688 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1cd115-8738-4e2c-896d-18822bdde8fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:34 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:34.689 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[0b9886f2-970b-4d66-8775-aa41b9b46319]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:34 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:34.711 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[372cdabb-ac3d-42d3-9120-148b21c3d7d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659842, 'reachable_time': 18056, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253608, 'error': None, 'target': 'ovnmeta-31c28528-025f-461b-8564-53ea033211c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:34 compute-0 systemd[1]: run-netns-ovnmeta\x2d31c28528\x2d025f\x2d461b\x2d8564\x2d53ea033211c5.mount: Deactivated successfully.
Nov 22 08:51:34 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:34.715 106754 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-31c28528-025f-461b-8564-53ea033211c5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 08:51:34 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:34.715 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[71fe05b8-2bc7-4261-bcd8-ba007fd526cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:51:34 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:34.718 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 08:51:35 compute-0 nova_compute[189268]: 2025-11-22 08:51:35.409 189273 DEBUG nova.network.neutron [-] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:51:35 compute-0 nova_compute[189268]: 2025-11-22 08:51:35.427 189273 INFO nova.compute.manager [-] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Took 0.80 seconds to deallocate network for instance.
Nov 22 08:51:35 compute-0 nova_compute[189268]: 2025-11-22 08:51:35.476 189273 DEBUG oslo_concurrency.lockutils [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:51:35 compute-0 nova_compute[189268]: 2025-11-22 08:51:35.477 189273 DEBUG oslo_concurrency.lockutils [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:51:35 compute-0 nova_compute[189268]: 2025-11-22 08:51:35.543 189273 DEBUG nova.compute.manager [req-3c0f1463-130c-4f53-adbc-5b4d464c0624 req-9603599e-0a3a-44c1-9b26-14f7d684dcc9 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Received event network-vif-deleted-fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:51:35 compute-0 nova_compute[189268]: 2025-11-22 08:51:35.593 189273 DEBUG nova.compute.provider_tree [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:51:35 compute-0 nova_compute[189268]: 2025-11-22 08:51:35.606 189273 DEBUG nova.scheduler.client.report [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:51:35 compute-0 nova_compute[189268]: 2025-11-22 08:51:35.632 189273 DEBUG oslo_concurrency.lockutils [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:51:35 compute-0 nova_compute[189268]: 2025-11-22 08:51:35.666 189273 INFO nova.scheduler.client.report [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Deleted allocations for instance ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1
Nov 22 08:51:35 compute-0 nova_compute[189268]: 2025-11-22 08:51:35.736 189273 DEBUG oslo_concurrency.lockutils [None req-495deace-9a0e-4c7e-8fee-965270d1acd1 0a25c34d06a84df687860465cf2eada0 09d51c6de735419ea20d768f11d957d9 - - default default] Lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:51:36 compute-0 nova_compute[189268]: 2025-11-22 08:51:36.803 189273 DEBUG nova.compute.manager [req-c6d4f761-858d-40e7-990d-f6fdecac529b req-9389bcc1-3d83-4e90-943c-76276681777e 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Received event network-vif-plugged-fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:51:36 compute-0 nova_compute[189268]: 2025-11-22 08:51:36.804 189273 DEBUG oslo_concurrency.lockutils [req-c6d4f761-858d-40e7-990d-f6fdecac529b req-9389bcc1-3d83-4e90-943c-76276681777e 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:51:36 compute-0 nova_compute[189268]: 2025-11-22 08:51:36.805 189273 DEBUG oslo_concurrency.lockutils [req-c6d4f761-858d-40e7-990d-f6fdecac529b req-9389bcc1-3d83-4e90-943c-76276681777e 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:51:36 compute-0 nova_compute[189268]: 2025-11-22 08:51:36.805 189273 DEBUG oslo_concurrency.lockutils [req-c6d4f761-858d-40e7-990d-f6fdecac529b req-9389bcc1-3d83-4e90-943c-76276681777e 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:51:36 compute-0 nova_compute[189268]: 2025-11-22 08:51:36.806 189273 DEBUG nova.compute.manager [req-c6d4f761-858d-40e7-990d-f6fdecac529b req-9389bcc1-3d83-4e90-943c-76276681777e 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] No waiting events found dispatching network-vif-plugged-fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:51:36 compute-0 nova_compute[189268]: 2025-11-22 08:51:36.806 189273 WARNING nova.compute.manager [req-c6d4f761-858d-40e7-990d-f6fdecac529b req-9389bcc1-3d83-4e90-943c-76276681777e 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Received unexpected event network-vif-plugged-fb3c52db-5aeb-4b04-b9b9-fc119e8654c7 for instance with vm_state deleted and task_state None.
Nov 22 08:51:37 compute-0 nova_compute[189268]: 2025-11-22 08:51:37.413 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:37 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:51:37.721 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:51:39 compute-0 nova_compute[189268]: 2025-11-22 08:51:39.543 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:41 compute-0 nova_compute[189268]: 2025-11-22 08:51:41.067 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:51:41 compute-0 nova_compute[189268]: 2025-11-22 08:51:41.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:51:41 compute-0 nova_compute[189268]: 2025-11-22 08:51:41.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:51:41 compute-0 nova_compute[189268]: 2025-11-22 08:51:41.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:51:41 compute-0 podman[253610]: 2025-11-22 08:51:41.120626496 +0000 UTC m=+0.074356389 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:51:41 compute-0 podman[253611]: 2025-11-22 08:51:41.125113207 +0000 UTC m=+0.072297124 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 08:51:41 compute-0 podman[253609]: 2025-11-22 08:51:41.147018876 +0000 UTC m=+0.101887630 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 22 08:51:41 compute-0 nova_compute[189268]: 2025-11-22 08:51:41.453 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-38817707-1f5a-4596-bfd2-b48048331de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:51:41 compute-0 nova_compute[189268]: 2025-11-22 08:51:41.455 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-38817707-1f5a-4596-bfd2-b48048331de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:51:41 compute-0 nova_compute[189268]: 2025-11-22 08:51:41.455 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:51:41 compute-0 nova_compute[189268]: 2025-11-22 08:51:41.456 189273 DEBUG nova.objects.instance [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 38817707-1f5a-4596-bfd2-b48048331de7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:51:41 compute-0 ovn_controller[97783]: 2025-11-22T08:51:41Z|00150|binding|INFO|Releasing lport 7ba31b4f-cb70-4305-a919-49ac9f8bddd1 from this chassis (sb_readonly=0)
Nov 22 08:51:41 compute-0 nova_compute[189268]: 2025-11-22 08:51:41.844 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:42 compute-0 ovn_controller[97783]: 2025-11-22T08:51:42Z|00151|binding|INFO|Releasing lport 7ba31b4f-cb70-4305-a919-49ac9f8bddd1 from this chassis (sb_readonly=0)
Nov 22 08:51:42 compute-0 nova_compute[189268]: 2025-11-22 08:51:42.209 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:42 compute-0 nova_compute[189268]: 2025-11-22 08:51:42.415 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:44 compute-0 nova_compute[189268]: 2025-11-22 08:51:44.545 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:45 compute-0 nova_compute[189268]: 2025-11-22 08:51:45.247 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Updating instance_info_cache with network_info: [{"id": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "address": "fa:16:3e:7a:15:7f", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a2be7e7-4a", "ovs_interfaceid": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:51:45 compute-0 nova_compute[189268]: 2025-11-22 08:51:45.260 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-38817707-1f5a-4596-bfd2-b48048331de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:51:45 compute-0 nova_compute[189268]: 2025-11-22 08:51:45.261 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 08:51:45 compute-0 nova_compute[189268]: 2025-11-22 08:51:45.262 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:51:45 compute-0 nova_compute[189268]: 2025-11-22 08:51:45.262 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:51:46 compute-0 nova_compute[189268]: 2025-11-22 08:51:46.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:51:46 compute-0 nova_compute[189268]: 2025-11-22 08:51:46.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:51:46 compute-0 ovn_controller[97783]: 2025-11-22T08:51:46Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:15:7f 10.100.0.3
Nov 22 08:51:46 compute-0 ovn_controller[97783]: 2025-11-22T08:51:46Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:15:7f 10.100.0.3
Nov 22 08:51:47 compute-0 nova_compute[189268]: 2025-11-22 08:51:47.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:51:47 compute-0 nova_compute[189268]: 2025-11-22 08:51:47.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:51:47 compute-0 nova_compute[189268]: 2025-11-22 08:51:47.417 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:49 compute-0 nova_compute[189268]: 2025-11-22 08:51:49.515 189273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763801494.5143015, ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:51:49 compute-0 nova_compute[189268]: 2025-11-22 08:51:49.516 189273 INFO nova.compute.manager [-] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] VM Stopped (Lifecycle Event)
Nov 22 08:51:49 compute-0 nova_compute[189268]: 2025-11-22 08:51:49.536 189273 DEBUG nova.compute.manager [None req-51c36edf-e245-4e59-b44a-319d985fd77c - - - - - -] [instance: ae1cc26e-2eb5-4bfe-a1bc-4a6b28f72de1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:51:49 compute-0 nova_compute[189268]: 2025-11-22 08:51:49.548 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:51 compute-0 nova_compute[189268]: 2025-11-22 08:51:51.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:51:51 compute-0 podman[253685]: 2025-11-22 08:51:51.122654433 +0000 UTC m=+0.082475718 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Nov 22 08:51:51 compute-0 podman[253686]: 2025-11-22 08:51:51.124285707 +0000 UTC m=+0.080009742 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 08:51:52 compute-0 nova_compute[189268]: 2025-11-22 08:51:52.421 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:52 compute-0 nova_compute[189268]: 2025-11-22 08:51:52.844 189273 INFO nova.compute.manager [None req-4c98fe6f-1db5-4f93-8ce7-cb9b738847fb 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Get console output
Nov 22 08:51:52 compute-0 nova_compute[189268]: 2025-11-22 08:51:52.943 239575 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 08:51:54 compute-0 podman[253722]: 2025-11-22 08:51:54.121751886 +0000 UTC m=+0.079469448 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.openshift.tags=base rhel9)
Nov 22 08:51:54 compute-0 podman[253723]: 2025-11-22 08:51:54.156848949 +0000 UTC m=+0.107193743 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 08:51:54 compute-0 nova_compute[189268]: 2025-11-22 08:51:54.551 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:55 compute-0 nova_compute[189268]: 2025-11-22 08:51:55.406 189273 DEBUG nova.compute.manager [req-a12af748-bb8e-4d57-bd8f-faa95e0b709f req-d7ecc96d-40f0-4232-a9a7-b2a58346b6d2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Received event network-changed-1a2be7e7-4a90-44c8-bdf7-adac66f1e84d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:51:55 compute-0 nova_compute[189268]: 2025-11-22 08:51:55.407 189273 DEBUG nova.compute.manager [req-a12af748-bb8e-4d57-bd8f-faa95e0b709f req-d7ecc96d-40f0-4232-a9a7-b2a58346b6d2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Refreshing instance network info cache due to event network-changed-1a2be7e7-4a90-44c8-bdf7-adac66f1e84d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:51:55 compute-0 nova_compute[189268]: 2025-11-22 08:51:55.407 189273 DEBUG oslo_concurrency.lockutils [req-a12af748-bb8e-4d57-bd8f-faa95e0b709f req-d7ecc96d-40f0-4232-a9a7-b2a58346b6d2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-38817707-1f5a-4596-bfd2-b48048331de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:51:55 compute-0 nova_compute[189268]: 2025-11-22 08:51:55.408 189273 DEBUG oslo_concurrency.lockutils [req-a12af748-bb8e-4d57-bd8f-faa95e0b709f req-d7ecc96d-40f0-4232-a9a7-b2a58346b6d2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-38817707-1f5a-4596-bfd2-b48048331de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:51:55 compute-0 nova_compute[189268]: 2025-11-22 08:51:55.408 189273 DEBUG nova.network.neutron [req-a12af748-bb8e-4d57-bd8f-faa95e0b709f req-d7ecc96d-40f0-4232-a9a7-b2a58346b6d2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Refreshing network info cache for port 1a2be7e7-4a90-44c8-bdf7-adac66f1e84d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:51:56 compute-0 nova_compute[189268]: 2025-11-22 08:51:56.889 189273 DEBUG nova.network.neutron [req-a12af748-bb8e-4d57-bd8f-faa95e0b709f req-d7ecc96d-40f0-4232-a9a7-b2a58346b6d2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Updated VIF entry in instance network info cache for port 1a2be7e7-4a90-44c8-bdf7-adac66f1e84d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:51:56 compute-0 nova_compute[189268]: 2025-11-22 08:51:56.891 189273 DEBUG nova.network.neutron [req-a12af748-bb8e-4d57-bd8f-faa95e0b709f req-d7ecc96d-40f0-4232-a9a7-b2a58346b6d2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Updating instance_info_cache with network_info: [{"id": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "address": "fa:16:3e:7a:15:7f", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a2be7e7-4a", "ovs_interfaceid": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:51:56 compute-0 nova_compute[189268]: 2025-11-22 08:51:56.936 189273 DEBUG oslo_concurrency.lockutils [req-a12af748-bb8e-4d57-bd8f-faa95e0b709f req-d7ecc96d-40f0-4232-a9a7-b2a58346b6d2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-38817707-1f5a-4596-bfd2-b48048331de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:51:57 compute-0 nova_compute[189268]: 2025-11-22 08:51:57.423 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:59 compute-0 nova_compute[189268]: 2025-11-22 08:51:59.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:51:59 compute-0 podman[253765]: 2025-11-22 08:51:59.147928477 +0000 UTC m=+0.100115662 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.expose-services=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 22 08:51:59 compute-0 nova_compute[189268]: 2025-11-22 08:51:59.552 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:51:59 compute-0 podman[203476]: time="2025-11-22T08:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:51:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:51:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Nov 22 08:52:01 compute-0 nova_compute[189268]: 2025-11-22 08:52:01.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:52:01 compute-0 nova_compute[189268]: 2025-11-22 08:52:01.122 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:52:01 compute-0 nova_compute[189268]: 2025-11-22 08:52:01.123 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:52:01 compute-0 nova_compute[189268]: 2025-11-22 08:52:01.123 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:52:01 compute-0 nova_compute[189268]: 2025-11-22 08:52:01.123 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:52:01 compute-0 nova_compute[189268]: 2025-11-22 08:52:01.211 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:52:01 compute-0 podman[253786]: 2025-11-22 08:52:01.263613946 +0000 UTC m=+0.089520407 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:52:01 compute-0 nova_compute[189268]: 2025-11-22 08:52:01.273 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:52:01 compute-0 nova_compute[189268]: 2025-11-22 08:52:01.274 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:52:01 compute-0 nova_compute[189268]: 2025-11-22 08:52:01.336 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
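The two qemu-img probes above are wrapped in oslo_concurrency.prlimit, which caps the child's address space (--as, bytes) and CPU time (--cpu, seconds) so a malformed image cannot hang or balloon the compute service. A sketch reproducing the logged invocation (the qemu_img_info helper name is hypothetical):

    import json
    import subprocess

    def qemu_img_info(path, as_bytes=1073741824, cpu_seconds=30):
        # Same command line as in the log: prlimit wrapper, C locale,
        # --force-share so a disk in use by QEMU can still be probed.
        cmd = [
            "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
            "--as=%d" % as_bytes, "--cpu=%d" % cpu_seconds,
            "--", "env", "LC_ALL=C", "LANG=C",
            "qemu-img", "info", path, "--force-share", "--output=json",
        ]
        out = subprocess.run(cmd, check=True, capture_output=True)
        return json.loads(out.stdout)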
Nov 22 08:52:01 compute-0 openstack_network_exporter[205661]: ERROR   08:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:52:01 compute-0 openstack_network_exporter[205661]: ERROR   08:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:52:01 compute-0 openstack_network_exporter[205661]: ERROR   08:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:52:01 compute-0 openstack_network_exporter[205661]: ERROR   08:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:52:01 compute-0 openstack_network_exporter[205661]: ERROR   08:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:52:01 compute-0 nova_compute[189268]: 2025-11-22 08:52:01.698 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:52:01 compute-0 nova_compute[189268]: 2025-11-22 08:52:01.699 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5192MB free_disk=72.43178939819336GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:52:01 compute-0 nova_compute[189268]: 2025-11-22 08:52:01.700 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:52:01 compute-0 nova_compute[189268]: 2025-11-22 08:52:01.700 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:52:01 compute-0 nova_compute[189268]: 2025-11-22 08:52:01.776 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 38817707-1f5a-4596-bfd2-b48048331de7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:52:01 compute-0 nova_compute[189268]: 2025-11-22 08:52:01.777 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:52:01 compute-0 nova_compute[189268]: 2025-11-22 08:52:01.777 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:52:01 compute-0 nova_compute[189268]: 2025-11-22 08:52:01.823 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:52:01 compute-0 nova_compute[189268]: 2025-11-22 08:52:01.837 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
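The inventory above is what placement turns into schedulable capacity, computed per resource class as (total - reserved) * allocation_ratio. Worked out with the logged numbers (a quick check, not nova code):

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~70.2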
Nov 22 08:52:01 compute-0 nova_compute[189268]: 2025-11-22 08:52:01.953 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:52:01 compute-0 nova_compute[189268]: 2025-11-22 08:52:01.954 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:52:02 compute-0 nova_compute[189268]: 2025-11-22 08:52:02.426 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:04 compute-0 nova_compute[189268]: 2025-11-22 08:52:04.556 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:05 compute-0 nova_compute[189268]: 2025-11-22 08:52:05.913 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquiring lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:52:05 compute-0 nova_compute[189268]: 2025-11-22 08:52:05.913 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:52:05 compute-0 nova_compute[189268]: 2025-11-22 08:52:05.931 189273 DEBUG nova.compute.manager [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.010 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.010 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.018 189273 DEBUG nova.virt.hardware [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.018 189273 INFO nova.compute.claims [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Claim successful on node compute-0.ctlplane.example.com
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.139 189273 DEBUG nova.compute.provider_tree [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.150 189273 DEBUG nova.scheduler.client.report [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.172 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.173 189273 DEBUG nova.compute.manager [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.218 189273 DEBUG nova.compute.manager [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.219 189273 DEBUG nova.network.neutron [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.236 189273 INFO nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.251 189273 DEBUG nova.compute.manager [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.330 189273 DEBUG nova.compute.manager [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.332 189273 DEBUG nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.332 189273 INFO nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Creating image(s)
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.333 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquiring lock "/var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.333 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "/var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.334 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "/var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.349 189273 DEBUG oslo_concurrency.processutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.409 189273 DEBUG oslo_concurrency.processutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.410 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquiring lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.411 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.422 189273 DEBUG oslo_concurrency.processutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.481 189273 DEBUG oslo_concurrency.processutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.482 189273 DEBUG oslo_concurrency.processutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad,backing_fmt=raw /var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.523 189273 DEBUG oslo_concurrency.processutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad,backing_fmt=raw /var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk 1073741824" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
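The create call above builds the instance disk as a copy-on-write qcow2 overlay on the cached base image under _base (backing_fmt=raw), so only the instance's own writes consume new space. The trailing size argument is simply the flavor's root disk expressed in bytes (a quick check; root_gb=1 matches the m1.nano flavor dumped further down):

    root_gb = 1
    assert root_gb * 1024**3 == 1073741824  # size passed to qemu-img create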
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.524 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "e3659e0d5dc4ae82934981faa7447edd23aca3ad" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.524 189273 DEBUG oslo_concurrency.processutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.585 189273 DEBUG oslo_concurrency.processutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.586 189273 DEBUG nova.virt.disk.api [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Checking if we can resize image /var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.586 189273 DEBUG oslo_concurrency.processutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.649 189273 DEBUG oslo_concurrency.processutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.650 189273 DEBUG nova.virt.disk.api [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Cannot resize image /var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.651 189273 DEBUG nova.objects.instance [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lazy-loading 'migration_context' on Instance uuid 11f1996b-9b7f-4973-bd95-263ee88f2a2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.662 189273 DEBUG nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.662 189273 DEBUG nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Ensure instance console log exists: /var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.663 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.663 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.664 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:52:06 compute-0 nova_compute[189268]: 2025-11-22 08:52:06.957 189273 DEBUG nova.policy [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '584cc3e3a5224a2e9a08273882841998', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b97da7a1b46046e59c36f5af412de432', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 08:52:07 compute-0 nova_compute[189268]: 2025-11-22 08:52:07.429 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:08 compute-0 nova_compute[189268]: 2025-11-22 08:52:08.975 189273 DEBUG nova.network.neutron [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Successfully created port: aae6fb4f-1301-4132-a140-67c2d72f334c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 08:52:09 compute-0 nova_compute[189268]: 2025-11-22 08:52:09.557 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:52:09.994 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:52:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:52:09.995 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:52:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:52:09.995 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:52:12 compute-0 podman[253832]: 2025-11-22 08:52:12.119629292 +0000 UTC m=+0.066622812 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:52:12 compute-0 podman[253833]: 2025-11-22 08:52:12.130903725 +0000 UTC m=+0.070427294 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 08:52:12 compute-0 podman[253831]: 2025-11-22 08:52:12.157019248 +0000 UTC m=+0.098821879 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 08:52:12 compute-0 nova_compute[189268]: 2025-11-22 08:52:12.431 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:13 compute-0 nova_compute[189268]: 2025-11-22 08:52:13.218 189273 DEBUG nova.network.neutron [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Successfully updated port: aae6fb4f-1301-4132-a140-67c2d72f334c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 08:52:13 compute-0 nova_compute[189268]: 2025-11-22 08:52:13.303 189273 DEBUG nova.compute.manager [req-e7bd00d1-5829-4ae8-972a-c5a7a13b007e req-59390fd7-e27d-4890-b186-42aa99c38223 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Received event network-changed-aae6fb4f-1301-4132-a140-67c2d72f334c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:52:13 compute-0 nova_compute[189268]: 2025-11-22 08:52:13.303 189273 DEBUG nova.compute.manager [req-e7bd00d1-5829-4ae8-972a-c5a7a13b007e req-59390fd7-e27d-4890-b186-42aa99c38223 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Refreshing instance network info cache due to event network-changed-aae6fb4f-1301-4132-a140-67c2d72f334c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:52:13 compute-0 nova_compute[189268]: 2025-11-22 08:52:13.304 189273 DEBUG oslo_concurrency.lockutils [req-e7bd00d1-5829-4ae8-972a-c5a7a13b007e req-59390fd7-e27d-4890-b186-42aa99c38223 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-11f1996b-9b7f-4973-bd95-263ee88f2a2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:52:13 compute-0 nova_compute[189268]: 2025-11-22 08:52:13.304 189273 DEBUG oslo_concurrency.lockutils [req-e7bd00d1-5829-4ae8-972a-c5a7a13b007e req-59390fd7-e27d-4890-b186-42aa99c38223 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-11f1996b-9b7f-4973-bd95-263ee88f2a2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:52:13 compute-0 nova_compute[189268]: 2025-11-22 08:52:13.304 189273 DEBUG nova.network.neutron [req-e7bd00d1-5829-4ae8-972a-c5a7a13b007e req-59390fd7-e27d-4890-b186-42aa99c38223 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Refreshing network info cache for port aae6fb4f-1301-4132-a140-67c2d72f334c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:52:13 compute-0 nova_compute[189268]: 2025-11-22 08:52:13.339 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquiring lock "refresh_cache-11f1996b-9b7f-4973-bd95-263ee88f2a2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:52:13 compute-0 nova_compute[189268]: 2025-11-22 08:52:13.884 189273 DEBUG nova.network.neutron [req-e7bd00d1-5829-4ae8-972a-c5a7a13b007e req-59390fd7-e27d-4890-b186-42aa99c38223 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 08:52:14 compute-0 nova_compute[189268]: 2025-11-22 08:52:14.559 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:14 compute-0 nova_compute[189268]: 2025-11-22 08:52:14.791 189273 DEBUG nova.network.neutron [req-e7bd00d1-5829-4ae8-972a-c5a7a13b007e req-59390fd7-e27d-4890-b186-42aa99c38223 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:52:14 compute-0 nova_compute[189268]: 2025-11-22 08:52:14.806 189273 DEBUG oslo_concurrency.lockutils [req-e7bd00d1-5829-4ae8-972a-c5a7a13b007e req-59390fd7-e27d-4890-b186-42aa99c38223 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-11f1996b-9b7f-4973-bd95-263ee88f2a2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:52:14 compute-0 nova_compute[189268]: 2025-11-22 08:52:14.806 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquired lock "refresh_cache-11f1996b-9b7f-4973-bd95-263ee88f2a2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:52:14 compute-0 nova_compute[189268]: 2025-11-22 08:52:14.807 189273 DEBUG nova.network.neutron [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 08:52:14 compute-0 nova_compute[189268]: 2025-11-22 08:52:14.944 189273 DEBUG nova.network.neutron [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.374 189273 DEBUG nova.network.neutron [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Updating instance_info_cache with network_info: [{"id": "aae6fb4f-1301-4132-a140-67c2d72f334c", "address": "fa:16:3e:83:9d:01", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaae6fb4f-13", "ovs_interfaceid": "aae6fb4f-1301-4132-a140-67c2d72f334c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
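The cache entry above is nova's standard network_info structure. A small sketch pulling out the fields usually needed when tracing a port, with values trimmed from the logged entry (illustrative parsing, not a nova API):

    vif = {
        "id": "aae6fb4f-1301-4132-a140-67c2d72f334c",
        "address": "fa:16:3e:83:9d:01",
        "devname": "tapaae6fb4f-13",
        "network": {
            "meta": {"mtu": 1442},
            "subnets": [{"ips": [{"address": "10.100.0.4"}]}],
        },
    }
    fixed_ips = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"]]
    # -> tapaae6fb4f-13 fa:16:3e:83:9d:01 ['10.100.0.4'] 1442
    print(vif["devname"], vif["address"], fixed_ips,
          vif["network"]["meta"]["mtu"])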
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.490 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Releasing lock "refresh_cache-11f1996b-9b7f-4973-bd95-263ee88f2a2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.491 189273 DEBUG nova.compute.manager [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Instance network_info: |[{"id": "aae6fb4f-1301-4132-a140-67c2d72f334c", "address": "fa:16:3e:83:9d:01", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaae6fb4f-13", "ovs_interfaceid": "aae6fb4f-1301-4132-a140-67c2d72f334c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.494 189273 DEBUG nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Start _get_guest_xml network_info=[{"id": "aae6fb4f-1301-4132-a140-67c2d72f334c", "address": "fa:16:3e:83:9d:01", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaae6fb4f-13", "ovs_interfaceid": "aae6fb4f-1301-4132-a140-67c2d72f334c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T08:46:32Z,direct_url=<?>,disk_format='qcow2',id=ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T08:46:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': 'ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.501 189273 WARNING nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.510 189273 DEBUG nova.virt.libvirt.host [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.511 189273 DEBUG nova.virt.libvirt.host [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.515 189273 DEBUG nova.virt.libvirt.host [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.516 189273 DEBUG nova.virt.libvirt.host [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.516 189273 DEBUG nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.517 189273 DEBUG nova.virt.hardware [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T08:46:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='60cc47c3-347f-4964-bb52-9bef8d0548a9',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T08:46:32Z,direct_url=<?>,disk_format='qcow2',id=ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='80e46844b3824928a6138235e5ede512',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T08:46:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.517 189273 DEBUG nova.virt.hardware [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.517 189273 DEBUG nova.virt.hardware [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.518 189273 DEBUG nova.virt.hardware [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.518 189273 DEBUG nova.virt.hardware [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.518 189273 DEBUG nova.virt.hardware [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.519 189273 DEBUG nova.virt.hardware [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.519 189273 DEBUG nova.virt.hardware [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.519 189273 DEBUG nova.virt.hardware [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.519 189273 DEBUG nova.virt.hardware [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.520 189273 DEBUG nova.virt.hardware [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
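The 1:1:1 result above comes from enumerating (sockets, cores, threads) triples whose product equals the flavor's vCPU count, within the logged 65536-per-dimension limits. A simplified brute-force sketch of that search (not the actual nova.virt.hardware implementation):

    import itertools

    def possible_topologies(vcpus, limit=65536):
        bound = min(vcpus, limit)
        return [(s, c, t)
                for s, c, t in itertools.product(range(1, bound + 1), repeat=3)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)]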
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.523 189273 DEBUG nova.virt.libvirt.vif [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:52:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1872582748',display_name='tempest-TestNetworkBasicOps-server-1872582748',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1872582748',id=14,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFJLIb76PnCaJEe4HChP2jiItWyMpby96mqJl49AemNpvNQl96CGTuk1xlLEu9oUiYnPfzKh+r0wHC94QvPJiTBVk9H9vnt/wqO/1H/DIS+I2JDpQmQ6QUZfCAf0cVYd9w==',key_name='tempest-TestNetworkBasicOps-1622596734',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b97da7a1b46046e59c36f5af412de432',ramdisk_id='',reservation_id='r-ty6kzn5h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1679658819',owner_user_name='tempest-TestNetworkBasicOps-1679658819-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:52:06Z,user_data=None,user_id='584cc3e3a5224a2e9a08273882841998',uuid=11f1996b-9b7f-4973-bd95-263ee88f2a2a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aae6fb4f-1301-4132-a140-67c2d72f334c", "address": "fa:16:3e:83:9d:01", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaae6fb4f-13", "ovs_interfaceid": "aae6fb4f-1301-4132-a140-67c2d72f334c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.523 189273 DEBUG nova.network.os_vif_util [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Converting VIF {"id": "aae6fb4f-1301-4132-a140-67c2d72f334c", "address": "fa:16:3e:83:9d:01", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaae6fb4f-13", "ovs_interfaceid": "aae6fb4f-1301-4132-a140-67c2d72f334c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.524 189273 DEBUG nova.network.os_vif_util [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:9d:01,bridge_name='br-int',has_traffic_filtering=True,id=aae6fb4f-1301-4132-a140-67c2d72f334c,network=Network(5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaae6fb4f-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.525 189273 DEBUG nova.objects.instance [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lazy-loading 'pci_devices' on Instance uuid 11f1996b-9b7f-4973-bd95-263ee88f2a2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.537 189273 DEBUG nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] End _get_guest_xml xml=<domain type="kvm">
Nov 22 08:52:16 compute-0 nova_compute[189268]:   <uuid>11f1996b-9b7f-4973-bd95-263ee88f2a2a</uuid>
Nov 22 08:52:16 compute-0 nova_compute[189268]:   <name>instance-0000000e</name>
Nov 22 08:52:16 compute-0 nova_compute[189268]:   <memory>131072</memory>
Nov 22 08:52:16 compute-0 nova_compute[189268]:   <vcpu>1</vcpu>
Nov 22 08:52:16 compute-0 nova_compute[189268]:   <metadata>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <nova:name>tempest-TestNetworkBasicOps-server-1872582748</nova:name>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <nova:creationTime>2025-11-22 08:52:16</nova:creationTime>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <nova:flavor name="m1.nano">
Nov 22 08:52:16 compute-0 nova_compute[189268]:         <nova:memory>128</nova:memory>
Nov 22 08:52:16 compute-0 nova_compute[189268]:         <nova:disk>1</nova:disk>
Nov 22 08:52:16 compute-0 nova_compute[189268]:         <nova:swap>0</nova:swap>
Nov 22 08:52:16 compute-0 nova_compute[189268]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 08:52:16 compute-0 nova_compute[189268]:         <nova:vcpus>1</nova:vcpus>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       </nova:flavor>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <nova:owner>
Nov 22 08:52:16 compute-0 nova_compute[189268]:         <nova:user uuid="584cc3e3a5224a2e9a08273882841998">tempest-TestNetworkBasicOps-1679658819-project-member</nova:user>
Nov 22 08:52:16 compute-0 nova_compute[189268]:         <nova:project uuid="b97da7a1b46046e59c36f5af412de432">tempest-TestNetworkBasicOps-1679658819</nova:project>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       </nova:owner>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <nova:root type="image" uuid="ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <nova:ports>
Nov 22 08:52:16 compute-0 nova_compute[189268]:         <nova:port uuid="aae6fb4f-1301-4132-a140-67c2d72f334c">
Nov 22 08:52:16 compute-0 nova_compute[189268]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:         </nova:port>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       </nova:ports>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     </nova:instance>
Nov 22 08:52:16 compute-0 nova_compute[189268]:   </metadata>
Nov 22 08:52:16 compute-0 nova_compute[189268]:   <sysinfo type="smbios">
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <system>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <entry name="manufacturer">RDO</entry>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <entry name="product">OpenStack Compute</entry>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <entry name="serial">11f1996b-9b7f-4973-bd95-263ee88f2a2a</entry>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <entry name="uuid">11f1996b-9b7f-4973-bd95-263ee88f2a2a</entry>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <entry name="family">Virtual Machine</entry>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     </system>
Nov 22 08:52:16 compute-0 nova_compute[189268]:   </sysinfo>
Nov 22 08:52:16 compute-0 nova_compute[189268]:   <os>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <boot dev="hd"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <smbios mode="sysinfo"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:   </os>
Nov 22 08:52:16 compute-0 nova_compute[189268]:   <features>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <acpi/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <apic/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <vmcoreinfo/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:   </features>
Nov 22 08:52:16 compute-0 nova_compute[189268]:   <clock offset="utc">
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <timer name="hpet" present="no"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:   </clock>
Nov 22 08:52:16 compute-0 nova_compute[189268]:   <cpu mode="host-model" match="exact">
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:52:16 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <target dev="vda" bus="virtio"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <disk type="file" device="cdrom">
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <driver name="qemu" type="raw" cache="none"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.config"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <target dev="sda" bus="sata"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <interface type="ethernet">
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <mac address="fa:16:3e:83:9d:01"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <mtu size="1442"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <target dev="tapaae6fb4f-13"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     </interface>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <serial type="pty">
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <log file="/var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a/console.log" append="off"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     </serial>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <video>
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     </video>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <input type="tablet" bus="usb"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <rng model="virtio">
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <backend model="random">/dev/urandom</backend>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <controller type="usb" index="0"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     <memballoon model="virtio">
Nov 22 08:52:16 compute-0 nova_compute[189268]:       <stats period="10"/>
Nov 22 08:52:16 compute-0 nova_compute[189268]:     </memballoon>
Nov 22 08:52:16 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:52:16 compute-0 nova_compute[189268]: </domain>
Nov 22 08:52:16 compute-0 nova_compute[189268]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.538 189273 DEBUG nova.compute.manager [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Preparing to wait for external event network-vif-plugged-aae6fb4f-1301-4132-a140-67c2d72f334c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.539 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquiring lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.539 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.539 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.540 189273 DEBUG nova.virt.libvirt.vif [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:52:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1872582748',display_name='tempest-TestNetworkBasicOps-server-1872582748',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1872582748',id=14,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFJLIb76PnCaJEe4HChP2jiItWyMpby96mqJl49AemNpvNQl96CGTuk1xlLEu9oUiYnPfzKh+r0wHC94QvPJiTBVk9H9vnt/wqO/1H/DIS+I2JDpQmQ6QUZfCAf0cVYd9w==',key_name='tempest-TestNetworkBasicOps-1622596734',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b97da7a1b46046e59c36f5af412de432',ramdisk_id='',reservation_id='r-ty6kzn5h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1679658819',owner_user_name='tempest-TestNetworkBasicOps-1679658819-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:52:06Z,user_data=None,user_id='584cc3e3a5224a2e9a08273882841998',uuid=11f1996b-9b7f-4973-bd95-263ee88f2a2a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aae6fb4f-1301-4132-a140-67c2d72f334c", "address": "fa:16:3e:83:9d:01", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaae6fb4f-13", "ovs_interfaceid": "aae6fb4f-1301-4132-a140-67c2d72f334c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.540 189273 DEBUG nova.network.os_vif_util [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Converting VIF {"id": "aae6fb4f-1301-4132-a140-67c2d72f334c", "address": "fa:16:3e:83:9d:01", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaae6fb4f-13", "ovs_interfaceid": "aae6fb4f-1301-4132-a140-67c2d72f334c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.541 189273 DEBUG nova.network.os_vif_util [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:9d:01,bridge_name='br-int',has_traffic_filtering=True,id=aae6fb4f-1301-4132-a140-67c2d72f334c,network=Network(5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaae6fb4f-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.542 189273 DEBUG os_vif [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:9d:01,bridge_name='br-int',has_traffic_filtering=True,id=aae6fb4f-1301-4132-a140-67c2d72f334c,network=Network(5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaae6fb4f-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.542 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.543 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.543 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.547 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.548 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaae6fb4f-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.548 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaae6fb4f-13, col_values=(('external_ids', {'iface-id': 'aae6fb4f-1301-4132-a140-67c2d72f334c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:83:9d:01', 'vm-uuid': '11f1996b-9b7f-4973-bd95-263ee88f2a2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.550 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:16 compute-0 NetworkManager[56326]: <info>  [1763801536.5521] manager: (tapaae6fb4f-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.552 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.559 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.561 189273 INFO os_vif [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:9d:01,bridge_name='br-int',has_traffic_filtering=True,id=aae6fb4f-1301-4132-a140-67c2d72f334c,network=Network(5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaae6fb4f-13')
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.764 189273 DEBUG nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.765 189273 DEBUG nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.765 189273 DEBUG nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] No VIF found with MAC fa:16:3e:83:9d:01, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 08:52:16 compute-0 nova_compute[189268]: 2025-11-22 08:52:16.766 189273 INFO nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Using config drive
Nov 22 08:52:17 compute-0 nova_compute[189268]: 2025-11-22 08:52:17.169 189273 INFO nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Creating config drive at /var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.config
Nov 22 08:52:17 compute-0 nova_compute[189268]: 2025-11-22 08:52:17.175 189273 DEBUG oslo_concurrency.processutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi_8kev0n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:52:17 compute-0 nova_compute[189268]: 2025-11-22 08:52:17.302 189273 DEBUG oslo_concurrency.processutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi_8kev0n" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:52:17 compute-0 kernel: tapaae6fb4f-13: entered promiscuous mode
Nov 22 08:52:17 compute-0 ovn_controller[97783]: 2025-11-22T08:52:17Z|00152|binding|INFO|Claiming lport aae6fb4f-1301-4132-a140-67c2d72f334c for this chassis.
Nov 22 08:52:17 compute-0 ovn_controller[97783]: 2025-11-22T08:52:17Z|00153|binding|INFO|aae6fb4f-1301-4132-a140-67c2d72f334c: Claiming fa:16:3e:83:9d:01 10.100.0.4
Nov 22 08:52:17 compute-0 NetworkManager[56326]: <info>  [1763801537.3836] manager: (tapaae6fb4f-13): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Nov 22 08:52:17 compute-0 nova_compute[189268]: 2025-11-22 08:52:17.382 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:17 compute-0 ovn_controller[97783]: 2025-11-22T08:52:17Z|00154|binding|INFO|Setting lport aae6fb4f-1301-4132-a140-67c2d72f334c ovn-installed in OVS
Nov 22 08:52:17 compute-0 nova_compute[189268]: 2025-11-22 08:52:17.403 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:17 compute-0 nova_compute[189268]: 2025-11-22 08:52:17.405 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:17 compute-0 systemd-udevd[253908]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:52:17 compute-0 systemd-machined[155703]: New machine qemu-15-instance-0000000e.
Nov 22 08:52:17 compute-0 nova_compute[189268]: 2025-11-22 08:52:17.432 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:17 compute-0 NetworkManager[56326]: <info>  [1763801537.4411] device (tapaae6fb4f-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 08:52:17 compute-0 NetworkManager[56326]: <info>  [1763801537.4423] device (tapaae6fb4f-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 08:52:17 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Nov 22 08:52:17 compute-0 ovn_controller[97783]: 2025-11-22T08:52:17Z|00155|binding|INFO|Setting lport aae6fb4f-1301-4132-a140-67c2d72f334c up in Southbound
Nov 22 08:52:17 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:52:17.584 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:9d:01 10.100.0.4'], port_security=['fa:16:3e:83:9d:01 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '11f1996b-9b7f-4973-bd95-263ee88f2a2a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b97da7a1b46046e59c36f5af412de432', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf0a9b02-16f7-4a24-a53f-04156062782f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42356185-0f5c-4367-9443-beeb712f6f09, chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=aae6fb4f-1301-4132-a140-67c2d72f334c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:52:17 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:52:17.588 106642 INFO neutron.agent.ovn.metadata.agent [-] Port aae6fb4f-1301-4132-a140-67c2d72f334c in datapath 5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3 bound to our chassis
Nov 22 08:52:17 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:52:17.591 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3
Nov 22 08:52:17 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:52:17.607 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[75bb6e2f-4b21-42bb-8ff6-82eaf61b78fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:52:17 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:52:17.636 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[4d4c780e-974c-4cb1-81b9-0ab8ab9f0dc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:52:17 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:52:17.640 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[e78dc9cc-1877-47c7-8f31-6ba162e84d6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:52:17 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:52:17.665 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[a28d8cb0-2dee-40d0-9984-84244dbfc39b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:52:17 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:52:17.681 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[e8337502-8ce4-4373-a6b5-bbe9834e73a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cf0b2bb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:a1:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658013, 'reachable_time': 31479, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253923, 'error': None, 'target': 'ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:52:17 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:52:17.698 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[21d186b3-c94f-4326-b98d-eb9b9f46b008]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5cf0b2bb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658022, 'tstamp': 658022}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253924, 'error': None, 'target': 'ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5cf0b2bb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658025, 'tstamp': 658025}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253924, 'error': None, 'target': 'ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:52:17 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:52:17.702 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cf0b2bb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:52:17 compute-0 nova_compute[189268]: 2025-11-22 08:52:17.704 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:17 compute-0 nova_compute[189268]: 2025-11-22 08:52:17.706 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:17 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:52:17.707 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cf0b2bb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:52:17 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:52:17.708 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:52:17 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:52:17.709 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cf0b2bb-a0, col_values=(('external_ids', {'iface-id': '7ba31b4f-cb70-4305-a919-49ac9f8bddd1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:52:17 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:52:17.710 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:52:17 compute-0 ovn_controller[97783]: 2025-11-22T08:52:17Z|00156|binding|INFO|Releasing lport 7ba31b4f-cb70-4305-a919-49ac9f8bddd1 from this chassis (sb_readonly=0)
Nov 22 08:52:17 compute-0 nova_compute[189268]: 2025-11-22 08:52:17.917 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:18 compute-0 ovn_controller[97783]: 2025-11-22T08:52:18Z|00157|binding|INFO|Releasing lport 7ba31b4f-cb70-4305-a919-49ac9f8bddd1 from this chassis (sb_readonly=0)
Nov 22 08:52:18 compute-0 nova_compute[189268]: 2025-11-22 08:52:18.013 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:18 compute-0 nova_compute[189268]: 2025-11-22 08:52:18.113 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801538.1128967, 11f1996b-9b7f-4973-bd95-263ee88f2a2a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:52:18 compute-0 nova_compute[189268]: 2025-11-22 08:52:18.114 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] VM Started (Lifecycle Event)
Nov 22 08:52:18 compute-0 nova_compute[189268]: 2025-11-22 08:52:18.139 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:52:18 compute-0 nova_compute[189268]: 2025-11-22 08:52:18.145 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801538.113047, 11f1996b-9b7f-4973-bd95-263ee88f2a2a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:52:18 compute-0 nova_compute[189268]: 2025-11-22 08:52:18.145 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] VM Paused (Lifecycle Event)
Nov 22 08:52:18 compute-0 nova_compute[189268]: 2025-11-22 08:52:18.160 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:52:18 compute-0 nova_compute[189268]: 2025-11-22 08:52:18.165 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:52:18 compute-0 nova_compute[189268]: 2025-11-22 08:52:18.181 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.240 189273 DEBUG nova.compute.manager [req-677166f6-bad1-4995-9b83-ca78a6fab623 req-062ced64-0f4a-40b8-a548-25ea1c214d93 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Received event network-vif-plugged-aae6fb4f-1301-4132-a140-67c2d72f334c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.240 189273 DEBUG oslo_concurrency.lockutils [req-677166f6-bad1-4995-9b83-ca78a6fab623 req-062ced64-0f4a-40b8-a548-25ea1c214d93 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.240 189273 DEBUG oslo_concurrency.lockutils [req-677166f6-bad1-4995-9b83-ca78a6fab623 req-062ced64-0f4a-40b8-a548-25ea1c214d93 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.241 189273 DEBUG oslo_concurrency.lockutils [req-677166f6-bad1-4995-9b83-ca78a6fab623 req-062ced64-0f4a-40b8-a548-25ea1c214d93 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.241 189273 DEBUG nova.compute.manager [req-677166f6-bad1-4995-9b83-ca78a6fab623 req-062ced64-0f4a-40b8-a548-25ea1c214d93 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Processing event network-vif-plugged-aae6fb4f-1301-4132-a140-67c2d72f334c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.241 189273 DEBUG nova.compute.manager [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.245 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801540.24519, 11f1996b-9b7f-4973-bd95-263ee88f2a2a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.246 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] VM Resumed (Lifecycle Event)
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.247 189273 DEBUG nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.252 189273 INFO nova.virt.libvirt.driver [-] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Instance spawned successfully.
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.252 189273 DEBUG nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.267 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.276 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.280 189273 DEBUG nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.281 189273 DEBUG nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.281 189273 DEBUG nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.282 189273 DEBUG nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.282 189273 DEBUG nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.283 189273 DEBUG nova.virt.libvirt.driver [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.304 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.449 189273 INFO nova.compute.manager [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Took 14.12 seconds to spawn the instance on the hypervisor.
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.450 189273 DEBUG nova.compute.manager [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.544 189273 INFO nova.compute.manager [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Took 14.55 seconds to build instance.
Nov 22 08:52:20 compute-0 nova_compute[189268]: 2025-11-22 08:52:20.622 189273 DEBUG oslo_concurrency.lockutils [None req-a8291519-3b13-489e-a9cf-76c8da113368 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:52:21 compute-0 nova_compute[189268]: 2025-11-22 08:52:21.552 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.096 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.097 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.097 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.097 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.104 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 11f1996b-9b7f-4973-bd95-263ee88f2a2a from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.105 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/11f1996b-9b7f-4973-bd95-263ee88f2a2a -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}41de7311aa3eb0f3adb679afd5ea377bdc27c99a5c84bf2ba532fbbe80a7016c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 22 08:52:22 compute-0 podman[253934]: 2025-11-22 08:52:22.147123852 +0000 UTC m=+0.096918847 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 22 08:52:22 compute-0 podman[253933]: 2025-11-22 08:52:22.150606075 +0000 UTC m=+0.100869533 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 22 08:52:22 compute-0 nova_compute[189268]: 2025-11-22 08:52:22.435 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:22 compute-0 nova_compute[189268]: 2025-11-22 08:52:22.453 189273 DEBUG nova.compute.manager [req-31a4caaa-96e3-4a82-a8fd-5196cde6b647 req-445455e2-a12f-4af3-86ac-6eb141f0f4eb 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Received event network-vif-plugged-aae6fb4f-1301-4132-a140-67c2d72f334c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:52:22 compute-0 nova_compute[189268]: 2025-11-22 08:52:22.457 189273 DEBUG oslo_concurrency.lockutils [req-31a4caaa-96e3-4a82-a8fd-5196cde6b647 req-445455e2-a12f-4af3-86ac-6eb141f0f4eb 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:52:22 compute-0 nova_compute[189268]: 2025-11-22 08:52:22.457 189273 DEBUG oslo_concurrency.lockutils [req-31a4caaa-96e3-4a82-a8fd-5196cde6b647 req-445455e2-a12f-4af3-86ac-6eb141f0f4eb 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:52:22 compute-0 nova_compute[189268]: 2025-11-22 08:52:22.458 189273 DEBUG oslo_concurrency.lockutils [req-31a4caaa-96e3-4a82-a8fd-5196cde6b647 req-445455e2-a12f-4af3-86ac-6eb141f0f4eb 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
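The acquire/release pair above ("waited 0.001s", then "held 0.000s") is oslo.concurrency's named-lock pattern, which Nova uses to serialize event handling per instance. A hedged sketch of the same pattern with lockutils (the function body is a placeholder, not Nova's actual code):

    from oslo_concurrency import lockutils

    def pop_instance_event(instance_uuid):
        # One named lock per instance, mirroring the "<uuid>-events"
        # lock name in the log lines above.
        @lockutils.synchronized(f"{instance_uuid}-events")
        def _pop_event():
            pass  # placeholder: look up and pop the waiting event
        _pop_event()

    pop_instance_event("11f1996b-9b7f-4973-bd95-263ee88f2a2a")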
Nov 22 08:52:22 compute-0 nova_compute[189268]: 2025-11-22 08:52:22.458 189273 DEBUG nova.compute.manager [req-31a4caaa-96e3-4a82-a8fd-5196cde6b647 req-445455e2-a12f-4af3-86ac-6eb141f0f4eb 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] No waiting events found dispatching network-vif-plugged-aae6fb4f-1301-4132-a140-67c2d72f334c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:52:22 compute-0 nova_compute[189268]: 2025-11-22 08:52:22.458 189273 WARNING nova.compute.manager [req-31a4caaa-96e3-4a82-a8fd-5196cde6b647 req-445455e2-a12f-4af3-86ac-6eb141f0f4eb 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Received unexpected event network-vif-plugged-aae6fb4f-1301-4132-a140-67c2d72f334c for instance with vm_state active and task_state None.
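The network-vif-plugged event itself arrives from Neutron through Nova's os-server-external-events API. A hedged sketch of how a caller would emit such an event with python-novaclient, assuming its server_external_events manager; the Keystone endpoint and credentials are placeholders, and only the UUIDs are taken from the log:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    auth = v3.Password(auth_url="https://keystone.example:5000/v3",  # placeholder
                       username="neutron", password="secret",       # placeholders
                       project_name="service",
                       user_domain_name="Default",
                       project_domain_name="Default")
    sess = session.Session(auth=auth)
    nova = nova_client.Client("2.1", session=sess)

    # server_uuid and tag (the port ID) are copied from the log lines above.
    nova.server_external_events.create([{
        "server_uuid": "11f1996b-9b7f-4973-bd95-263ee88f2a2a",
        "name": "network-vif-plugged",
        "tag": "aae6fb4f-1301-4132-a140-67c2d72f334c",
        "status": "completed",
    }])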
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.615 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1852 Content-Type: application/json Date: Sat, 22 Nov 2025 08:52:22 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-3bc95bfc-bf75-4cab-b56d-479f94db3e04 x-openstack-request-id: req-3bc95bfc-bf75-4cab-b56d-479f94db3e04 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.615 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "11f1996b-9b7f-4973-bd95-263ee88f2a2a", "name": "tempest-TestNetworkBasicOps-server-1872582748", "status": "ACTIVE", "tenant_id": "b97da7a1b46046e59c36f5af412de432", "user_id": "584cc3e3a5224a2e9a08273882841998", "metadata": {}, "hostId": "b9b98862ab6bb5de822965344e89e8d255d641c5dbb4d2394dc2806a", "image": {"id": "ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc"}]}, "flavor": {"id": "60cc47c3-347f-4964-bb52-9bef8d0548a9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/60cc47c3-347f-4964-bb52-9bef8d0548a9"}]}, "created": "2025-11-22T08:52:05Z", "updated": "2025-11-22T08:52:20Z", "addresses": {"tempest-network-smoke--878622863": [{"version": 4, "addr": "10.100.0.4", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:83:9d:01"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/11f1996b-9b7f-4973-bd95-263ee88f2a2a"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/11f1996b-9b7f-4973-bd95-263ee88f2a2a"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-1622596734", "OS-SRV-USG:launched_at": "2025-11-22T08:52:20.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-419393523"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000e", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.615 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/11f1996b-9b7f-4973-bd95-263ee88f2a2a used request id req-3bc95bfc-bf75-4cab-b56d-479f94db3e04 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.616 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '11f1996b-9b7f-4973-bd95-263ee88f2a2a', 'name': 'tempest-TestNetworkBasicOps-server-1872582748', 'flavor': {'id': '60cc47c3-347f-4964-bb52-9bef8d0548a9', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b97da7a1b46046e59c36f5af412de432', 'user_id': '584cc3e3a5224a2e9a08273882841998', 'hostId': 'b9b98862ab6bb5de822965344e89e8d255d641c5dbb4d2394dc2806a', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
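The REQ/RESP pair above is one GET /v2.1/servers/{id} per discovered instance; ceilometer then trims the response down to the instance-data dict it logs. A sketch of the same lookup, reusing a keystoneauth1 session built as in the previous sketch:

    from novaclient import client as nova_client

    nova = nova_client.Client("2.1", session=sess)  # sess: see previous sketch
    server = nova.servers.get("11f1996b-9b7f-4973-bd95-263ee88f2a2a")
    print(server.status)        # "ACTIVE", matching the RESP BODY above
    print(server.flavor["id"])  # "60cc47c3-347f-4964-bb52-9bef8d0548a9"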
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.619 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 38817707-1f5a-4596-bfd2-b48048331de7 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 22 08:52:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:22.620 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/38817707-1f5a-4596-bfd2-b48048331de7 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}41de7311aa3eb0f3adb679afd5ea377bdc27c99a5c84bf2ba532fbbe80a7016c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.121 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1851 Content-Type: application/json Date: Sat, 22 Nov 2025 08:52:22 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-5b6a021d-17d1-495a-bbc0-2e1f1cdd6842 x-openstack-request-id: req-5b6a021d-17d1-495a-bbc0-2e1f1cdd6842 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.121 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "38817707-1f5a-4596-bfd2-b48048331de7", "name": "tempest-TestNetworkBasicOps-server-472251035", "status": "ACTIVE", "tenant_id": "b97da7a1b46046e59c36f5af412de432", "user_id": "584cc3e3a5224a2e9a08273882841998", "metadata": {}, "hostId": "b9b98862ab6bb5de822965344e89e8d255d641c5dbb4d2394dc2806a", "image": {"id": "ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc"}]}, "flavor": {"id": "60cc47c3-347f-4964-bb52-9bef8d0548a9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/60cc47c3-347f-4964-bb52-9bef8d0548a9"}]}, "created": "2025-11-22T08:50:52Z", "updated": "2025-11-22T08:51:07Z", "addresses": {"tempest-network-smoke--878622863": [{"version": 4, "addr": "10.100.0.3", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:7a:15:7f"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/38817707-1f5a-4596-bfd2-b48048331de7"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/38817707-1f5a-4596-bfd2-b48048331de7"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-203402494", "OS-SRV-USG:launched_at": "2025-11-22T08:51:07.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-1695355777"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000c", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.122 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/38817707-1f5a-4596-bfd2-b48048331de7 used request id req-5b6a021d-17d1-495a-bbc0-2e1f1cdd6842 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.123 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '38817707-1f5a-4596-bfd2-b48048331de7', 'name': 'tempest-TestNetworkBasicOps-server-472251035', 'flavor': {'id': '60cc47c3-347f-4964-bb52-9bef8d0548a9', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b97da7a1b46046e59c36f5af412de432', 'user_id': '584cc3e3a5224a2e9a08273882841998', 'hostId': 'b9b98862ab6bb5de822965344e89e8d255d641c5dbb4d2394dc2806a', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.123 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.123 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.123 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.124 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.125 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-22T08:52:23.124072) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.128 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 11f1996b-9b7f-4973-bd95-263ee88f2a2a / tapaae6fb4f-13 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.129 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.133 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 38817707-1f5a-4596-bfd2-b48048331de7 / tap1a2be7e7-4a inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.133 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/network.incoming.bytes volume: 20294 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.134 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
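"No delta meter predecessor" above means this was the first reading for that vNIC, so there is no earlier cumulative value to subtract. A generic, hedged sketch of that bookkeeping (illustrative only, not ceilometer's implementation; one simple policy is to report the raw counter on the first poll):

    previous = {}  # (instance_id, nic) -> last cumulative reading

    def delta(instance_id, nic, reading):
        key = (instance_id, nic)
        prev = previous.get(key)
        previous[key] = reading
        if prev is None:
            return reading   # no predecessor yet: first poll for this vNIC
        return reading - prev

    print(delta("11f1996b-9b7f-4973-bd95-263ee88f2a2a", "tapaae6fb4f-13", 90))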
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.134 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.134 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.134 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.135 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.135 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.135 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-22T08:52:23.135158) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.135 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/network.outgoing.packets volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.136 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.136 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.136 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.136 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.136 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.136 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.136 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.137 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.137 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.137 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-22T08:52:23.136744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.137 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.138 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.138 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.138 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.138 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.138 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.138 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.139 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.139 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.139 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.139 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-22T08:52:23.138337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.139 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.140 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.140 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-22T08:52:23.140057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.162 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/cpu volume: 2800000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.194 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/cpu volume: 36450000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.194 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.194 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.194 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.195 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.195 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.195 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.195 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.195 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.196 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.195 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-22T08:52:23.195193) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.196 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.196 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.196 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.196 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-22T08:52:23.196476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.209 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.210 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.227 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.227 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.227 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.228 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.228 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.228 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.228 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.228 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.229 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-22T08:52:23.228404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.272 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.device.read.bytes volume: 9279488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.273 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.325 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/disk.device.read.bytes volume: 30362112 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.326 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.326 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.327 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.327 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.327 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.327 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.327 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.328 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.328 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.328 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-22T08:52:23.327777) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.328 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.329 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.329 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.329 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.329 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.329 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.330 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.330 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.330 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.330 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/network.outgoing.bytes volume: 16018 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-22T08:52:23.330172) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.331 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.331 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.331 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.331 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.331 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.332 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.332 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.332 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-1872582748>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-472251035>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-1872582748>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-472251035>]
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.333 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-22T08:52:23.331987) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.333 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.333 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.333 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.333 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.333 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.334 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-22T08:52:23.333715) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.334 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.device.read.latency volume: 1273115524 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.334 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.device.read.latency volume: 3345879 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.334 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/disk.device.read.latency volume: 3292608461 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.335 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/disk.device.read.latency volume: 558941335 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.335 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.336 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.336 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.336 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.336 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.336 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.336 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.337 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-22T08:52:23.336596) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.337 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.337 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.337 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.337 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.338 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.338 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.338 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.338 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-22T08:52:23.338304) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.338 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.339 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.339 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/disk.device.write.requests volume: 312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.339 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.340 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.340 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.340 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.340 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.340 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.340 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.340 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.device.read.requests volume: 304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.341 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.341 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/disk.device.read.requests volume: 1093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.342 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.342 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.342 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.342 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.343 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.343 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-22T08:52:23.340787) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.343 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.343 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.344 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-22T08:52:23.343687) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.344 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.344 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.344 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.345 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.345 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.345 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.345 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.345 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.345 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.346 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.346 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-22T08:52:23.345902) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.346 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/disk.device.write.bytes volume: 72974336 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.347 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.347 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.348 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.348 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.348 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.348 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.348 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.348 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.349 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.349 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.349 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.349 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.350 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.350 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.350 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-22T08:52:23.348516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.350 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.350 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-22T08:52:23.350511) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.350 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.351 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.351 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/disk.device.write.latency volume: 49263855334 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.351 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.352 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.352 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.352 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.353 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.353 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.353 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.354 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-22T08:52:23.353581) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.354 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.354 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.354 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.355 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.355 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.355 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.355 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.356 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-22T08:52:23.355368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.356 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/network.incoming.packets volume: 117 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.356 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.357 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.357 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.357 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.357 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.357 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.358 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.358 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.358 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.358 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.358 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-22T08:52:23.357554) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.358 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.359 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.359 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.360 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.360 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.361 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.361 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.361 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.361 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.361 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.362 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.362 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.363 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.363 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.363 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-22T08:52:23.359093) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.364 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-22T08:52:23.361871) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.364 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.364 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.364 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.364 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.365 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-22T08:52:23.364867) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.365 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-1872582748>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-472251035>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-1872582748>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-472251035>]
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.365 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.365 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.365 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.365 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.366 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.366 15 DEBUG ceilometer.compute.pollsters [-] 11f1996b-9b7f-4973-bd95-263ee88f2a2a/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-22T08:52:23.366014) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.366 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 11f1996b-9b7f-4973-bd95-263ee88f2a2a: ceilometer.compute.pollsters.NoVolumeException
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.366 15 DEBUG ceilometer.compute.pollsters [-] 38817707-1f5a-4596-bfd2-b48048331de7/memory.usage volume: 46.546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.366 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.368 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.368 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.368 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.368 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.368 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.369 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.369 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.369 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.369 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.369 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.369 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.370 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.370 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.370 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.370 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.371 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.371 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.371 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.371 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.371 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.371 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.372 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.372 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:23 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:52:23.372 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:52:24 compute-0 NetworkManager[56326]: <info>  [1763801544.2335] manager: (patch-br-int-to-provnet-4626db62-a226-41d4-b94f-04168db037c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Nov 22 08:52:24 compute-0 NetworkManager[56326]: <info>  [1763801544.2347] manager: (patch-provnet-4626db62-a226-41d4-b94f-04168db037c0-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Nov 22 08:52:24 compute-0 nova_compute[189268]: 2025-11-22 08:52:24.234 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:24 compute-0 nova_compute[189268]: 2025-11-22 08:52:24.307 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:24 compute-0 ovn_controller[97783]: 2025-11-22T08:52:24Z|00158|binding|INFO|Releasing lport 7ba31b4f-cb70-4305-a919-49ac9f8bddd1 from this chassis (sb_readonly=0)
Nov 22 08:52:24 compute-0 nova_compute[189268]: 2025-11-22 08:52:24.318 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:24 compute-0 nova_compute[189268]: 2025-11-22 08:52:24.886 189273 DEBUG nova.compute.manager [req-9c2209dc-c3c9-434a-a693-b8dd529d8650 req-82ff5673-3a77-4896-874f-59c147923ba2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Received event network-changed-aae6fb4f-1301-4132-a140-67c2d72f334c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:52:24 compute-0 nova_compute[189268]: 2025-11-22 08:52:24.887 189273 DEBUG nova.compute.manager [req-9c2209dc-c3c9-434a-a693-b8dd529d8650 req-82ff5673-3a77-4896-874f-59c147923ba2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Refreshing instance network info cache due to event network-changed-aae6fb4f-1301-4132-a140-67c2d72f334c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:52:24 compute-0 nova_compute[189268]: 2025-11-22 08:52:24.888 189273 DEBUG oslo_concurrency.lockutils [req-9c2209dc-c3c9-434a-a693-b8dd529d8650 req-82ff5673-3a77-4896-874f-59c147923ba2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-11f1996b-9b7f-4973-bd95-263ee88f2a2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:52:24 compute-0 nova_compute[189268]: 2025-11-22 08:52:24.888 189273 DEBUG oslo_concurrency.lockutils [req-9c2209dc-c3c9-434a-a693-b8dd529d8650 req-82ff5673-3a77-4896-874f-59c147923ba2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-11f1996b-9b7f-4973-bd95-263ee88f2a2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:52:24 compute-0 nova_compute[189268]: 2025-11-22 08:52:24.888 189273 DEBUG nova.network.neutron [req-9c2209dc-c3c9-434a-a693-b8dd529d8650 req-82ff5673-3a77-4896-874f-59c147923ba2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Refreshing network info cache for port aae6fb4f-1301-4132-a140-67c2d72f334c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:52:25 compute-0 podman[253973]: 2025-11-22 08:52:25.125108326 +0000 UTC m=+0.078870773 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, build-date=2024-09-18T21:23:30, container_name=kepler, managed_by=edpm_ansible, maintainer=Red Hat, Inc.)
Nov 22 08:52:25 compute-0 podman[253974]: 2025-11-22 08:52:25.157200158 +0000 UTC m=+0.109283669 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 22 08:52:26 compute-0 nova_compute[189268]: 2025-11-22 08:52:26.556 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:27 compute-0 nova_compute[189268]: 2025-11-22 08:52:27.020 189273 DEBUG nova.network.neutron [req-9c2209dc-c3c9-434a-a693-b8dd529d8650 req-82ff5673-3a77-4896-874f-59c147923ba2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Updated VIF entry in instance network info cache for port aae6fb4f-1301-4132-a140-67c2d72f334c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:52:27 compute-0 nova_compute[189268]: 2025-11-22 08:52:27.021 189273 DEBUG nova.network.neutron [req-9c2209dc-c3c9-434a-a693-b8dd529d8650 req-82ff5673-3a77-4896-874f-59c147923ba2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Updating instance_info_cache with network_info: [{"id": "aae6fb4f-1301-4132-a140-67c2d72f334c", "address": "fa:16:3e:83:9d:01", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaae6fb4f-13", "ovs_interfaceid": "aae6fb4f-1301-4132-a140-67c2d72f334c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:52:27 compute-0 nova_compute[189268]: 2025-11-22 08:52:27.200 189273 DEBUG oslo_concurrency.lockutils [req-9c2209dc-c3c9-434a-a693-b8dd529d8650 req-82ff5673-3a77-4896-874f-59c147923ba2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-11f1996b-9b7f-4973-bd95-263ee88f2a2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:52:27 compute-0 nova_compute[189268]: 2025-11-22 08:52:27.438 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:29 compute-0 podman[203476]: time="2025-11-22T08:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:52:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:52:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
Nov 22 08:52:30 compute-0 podman[254018]: 2025-11-22 08:52:30.13340731 +0000 UTC m=+0.080428519 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, distribution-scope=public, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, name=ubi9-minimal)
Nov 22 08:52:31 compute-0 openstack_network_exporter[205661]: ERROR   08:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:52:31 compute-0 openstack_network_exporter[205661]: ERROR   08:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:52:31 compute-0 openstack_network_exporter[205661]: ERROR   08:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:52:31 compute-0 openstack_network_exporter[205661]: ERROR   08:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:52:31 compute-0 openstack_network_exporter[205661]: ERROR   08:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:52:31 compute-0 nova_compute[189268]: 2025-11-22 08:52:31.558 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:32 compute-0 podman[254038]: 2025-11-22 08:52:32.107358478 +0000 UTC m=+0.058046643 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 22 08:52:32 compute-0 nova_compute[189268]: 2025-11-22 08:52:32.442 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:36 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:52:36.284 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:cf:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'd6:f7:8f:a1:cd:35'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:52:36 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:52:36.286 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 08:52:36 compute-0 nova_compute[189268]: 2025-11-22 08:52:36.287 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:36 compute-0 nova_compute[189268]: 2025-11-22 08:52:36.561 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:37 compute-0 nova_compute[189268]: 2025-11-22 08:52:37.442 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:40 compute-0 nova_compute[189268]: 2025-11-22 08:52:40.955 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:52:41 compute-0 nova_compute[189268]: 2025-11-22 08:52:41.567 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:42 compute-0 nova_compute[189268]: 2025-11-22 08:52:42.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:52:42 compute-0 nova_compute[189268]: 2025-11-22 08:52:42.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:52:42 compute-0 nova_compute[189268]: 2025-11-22 08:52:42.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:52:42 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:52:42.290 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:52:42 compute-0 nova_compute[189268]: 2025-11-22 08:52:42.372 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-38817707-1f5a-4596-bfd2-b48048331de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:52:42 compute-0 nova_compute[189268]: 2025-11-22 08:52:42.372 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-38817707-1f5a-4596-bfd2-b48048331de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:52:42 compute-0 nova_compute[189268]: 2025-11-22 08:52:42.374 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:52:42 compute-0 nova_compute[189268]: 2025-11-22 08:52:42.374 189273 DEBUG nova.objects.instance [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 38817707-1f5a-4596-bfd2-b48048331de7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:52:42 compute-0 nova_compute[189268]: 2025-11-22 08:52:42.445 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:43 compute-0 podman[254061]: 2025-11-22 08:52:43.123493736 +0000 UTC m=+0.072266381 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:52:43 compute-0 podman[254062]: 2025-11-22 08:52:43.129063924 +0000 UTC m=+0.075020154 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 08:52:43 compute-0 podman[254063]: 2025-11-22 08:52:43.145183103 +0000 UTC m=+0.085265857 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 08:52:43 compute-0 nova_compute[189268]: 2025-11-22 08:52:43.917 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Updating instance_info_cache with network_info: [{"id": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "address": "fa:16:3e:7a:15:7f", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a2be7e7-4a", "ovs_interfaceid": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:52:43 compute-0 nova_compute[189268]: 2025-11-22 08:52:43.936 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-38817707-1f5a-4596-bfd2-b48048331de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:52:43 compute-0 nova_compute[189268]: 2025-11-22 08:52:43.937 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 08:52:44 compute-0 nova_compute[189268]: 2025-11-22 08:52:44.933 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:52:46 compute-0 nova_compute[189268]: 2025-11-22 08:52:46.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:52:46 compute-0 nova_compute[189268]: 2025-11-22 08:52:46.101 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:52:46 compute-0 nova_compute[189268]: 2025-11-22 08:52:46.351 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:46 compute-0 nova_compute[189268]: 2025-11-22 08:52:46.568 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:47 compute-0 nova_compute[189268]: 2025-11-22 08:52:47.447 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:48 compute-0 nova_compute[189268]: 2025-11-22 08:52:48.101 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:52:48 compute-0 nova_compute[189268]: 2025-11-22 08:52:48.101 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:52:48 compute-0 nova_compute[189268]: 2025-11-22 08:52:48.102 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:52:49 compute-0 nova_compute[189268]: 2025-11-22 08:52:49.101 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:51 compute-0 nova_compute[189268]: 2025-11-22 08:52:51.574 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:52 compute-0 nova_compute[189268]: 2025-11-22 08:52:52.451 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:52 compute-0 nova_compute[189268]: 2025-11-22 08:52:52.944 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:53 compute-0 nova_compute[189268]: 2025-11-22 08:52:53.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:52:53 compute-0 podman[254120]: 2025-11-22 08:52:53.146105411 +0000 UTC m=+0.096251950 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true)
Nov 22 08:52:53 compute-0 podman[254121]: 2025-11-22 08:52:53.149182642 +0000 UTC m=+0.094573485 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 08:52:53 compute-0 nova_compute[189268]: 2025-11-22 08:52:53.778 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:56 compute-0 podman[254162]: 2025-11-22 08:52:56.164265649 +0000 UTC m=+0.110025815 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 22 08:52:56 compute-0 podman[254161]: 2025-11-22 08:52:56.174333936 +0000 UTC m=+0.110391764 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, managed_by=edpm_ansible, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, container_name=kepler, name=ubi9, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release=1214.1726694543)
Nov 22 08:52:56 compute-0 nova_compute[189268]: 2025-11-22 08:52:56.578 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:57 compute-0 nova_compute[189268]: 2025-11-22 08:52:57.453 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:52:58 compute-0 ovn_controller[97783]: 2025-11-22T08:52:58Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:83:9d:01 10.100.0.4
Nov 22 08:52:58 compute-0 ovn_controller[97783]: 2025-11-22T08:52:58Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:83:9d:01 10.100.0.4
Nov 22 08:52:59 compute-0 podman[203476]: time="2025-11-22T08:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:52:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:52:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
Nov 22 08:53:01 compute-0 podman[254215]: 2025-11-22 08:53:01.109231492 +0000 UTC m=+0.064303040 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, vendor=Red Hat, Inc.)
Nov 22 08:53:01 compute-0 openstack_network_exporter[205661]: ERROR   08:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:53:01 compute-0 openstack_network_exporter[205661]: ERROR   08:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:53:01 compute-0 openstack_network_exporter[205661]: ERROR   08:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:53:01 compute-0 openstack_network_exporter[205661]: ERROR   08:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:53:01 compute-0 openstack_network_exporter[205661]: ERROR   08:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:53:01 compute-0 nova_compute[189268]: 2025-11-22 08:53:01.707 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:02 compute-0 nova_compute[189268]: 2025-11-22 08:53:02.455 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:03 compute-0 nova_compute[189268]: 2025-11-22 08:53:03.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:53:03 compute-0 podman[254236]: 2025-11-22 08:53:03.130832066 +0000 UTC m=+0.086324785 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:53:03 compute-0 nova_compute[189268]: 2025-11-22 08:53:03.138 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:03 compute-0 nova_compute[189268]: 2025-11-22 08:53:03.138 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:03 compute-0 nova_compute[189268]: 2025-11-22 08:53:03.138 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:03 compute-0 nova_compute[189268]: 2025-11-22 08:53:03.138 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:53:03 compute-0 nova_compute[189268]: 2025-11-22 08:53:03.225 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:53:03 compute-0 nova_compute[189268]: 2025-11-22 08:53:03.296 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:53:03 compute-0 nova_compute[189268]: 2025-11-22 08:53:03.297 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:53:03 compute-0 nova_compute[189268]: 2025-11-22 08:53:03.359 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:53:03 compute-0 nova_compute[189268]: 2025-11-22 08:53:03.367 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:53:03 compute-0 nova_compute[189268]: 2025-11-22 08:53:03.437 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:53:03 compute-0 nova_compute[189268]: 2025-11-22 08:53:03.438 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:53:03 compute-0 nova_compute[189268]: 2025-11-22 08:53:03.503 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:53:03 compute-0 nova_compute[189268]: 2025-11-22 08:53:03.865 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:53:03 compute-0 nova_compute[189268]: 2025-11-22 08:53:03.866 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5003MB free_disk=72.40313720703125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:53:03 compute-0 nova_compute[189268]: 2025-11-22 08:53:03.867 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:03 compute-0 nova_compute[189268]: 2025-11-22 08:53:03.867 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:04 compute-0 nova_compute[189268]: 2025-11-22 08:53:04.063 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 38817707-1f5a-4596-bfd2-b48048331de7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:53:04 compute-0 nova_compute[189268]: 2025-11-22 08:53:04.063 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 11f1996b-9b7f-4973-bd95-263ee88f2a2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:53:04 compute-0 nova_compute[189268]: 2025-11-22 08:53:04.064 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:53:04 compute-0 nova_compute[189268]: 2025-11-22 08:53:04.064 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:53:04 compute-0 nova_compute[189268]: 2025-11-22 08:53:04.133 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:53:04 compute-0 nova_compute[189268]: 2025-11-22 08:53:04.148 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
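[annotation] Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio per resource class. A worked check against the values logged above:

    # Inventory values copied from the report above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(f"{rc}: schedulable capacity = {capacity}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70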
Nov 22 08:53:04 compute-0 nova_compute[189268]: 2025-11-22 08:53:04.170 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:53:04 compute-0 nova_compute[189268]: 2025-11-22 08:53:04.171 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:04 compute-0 nova_compute[189268]: 2025-11-22 08:53:04.864 189273 INFO nova.compute.manager [None req-97e714b0-e31a-487a-a51f-25f82009b0ff 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Get console output
Nov 22 08:53:04 compute-0 nova_compute[189268]: 2025-11-22 08:53:04.872 239575 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
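[annotation] The swallowed error above is an ordinary Python TypeError raised when None is appended to a bytes buffer; nova logs it and carries on. A minimal reproduction (the buffer/chunk names are hypothetical, standing in for the pty read path):

    buf = b""
    chunk = None  # e.g. a console read that returned no data
    try:
        buf += chunk
    except TypeError as exc:
        print(exc)  # -> can't concat NoneType to bytes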
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.180 189273 DEBUG oslo_concurrency.lockutils [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquiring lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.181 189273 DEBUG oslo_concurrency.lockutils [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.181 189273 DEBUG oslo_concurrency.lockutils [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquiring lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.181 189273 DEBUG oslo_concurrency.lockutils [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.182 189273 DEBUG oslo_concurrency.lockutils [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.183 189273 INFO nova.compute.manager [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Terminating instance
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.184 189273 DEBUG nova.compute.manager [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 08:53:05 compute-0 kernel: tapaae6fb4f-13 (unregistering): left promiscuous mode
Nov 22 08:53:05 compute-0 NetworkManager[56326]: <info>  [1763801585.2204] device (tapaae6fb4f-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.232 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:05 compute-0 ovn_controller[97783]: 2025-11-22T08:53:05Z|00159|binding|INFO|Releasing lport aae6fb4f-1301-4132-a140-67c2d72f334c from this chassis (sb_readonly=0)
Nov 22 08:53:05 compute-0 ovn_controller[97783]: 2025-11-22T08:53:05Z|00160|binding|INFO|Setting lport aae6fb4f-1301-4132-a140-67c2d72f334c down in Southbound
Nov 22 08:53:05 compute-0 ovn_controller[97783]: 2025-11-22T08:53:05Z|00161|binding|INFO|Removing iface tapaae6fb4f-13 ovn-installed in OVS
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.239 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:05.245 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:9d:01 10.100.0.4'], port_security=['fa:16:3e:83:9d:01 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '11f1996b-9b7f-4973-bd95-263ee88f2a2a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b97da7a1b46046e59c36f5af412de432', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf0a9b02-16f7-4a24-a53f-04156062782f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42356185-0f5c-4367-9443-beeb712f6f09, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=aae6fb4f-1301-4132-a140-67c2d72f334c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:53:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:05.246 106642 INFO neutron.agent.ovn.metadata.agent [-] Port aae6fb4f-1301-4132-a140-67c2d72f334c in datapath 5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3 unbound from our chassis
Nov 22 08:53:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:05.248 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.266 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:05.266 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[7a3576a5-d2f6-4d9b-b531-0248911f9700]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:05 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Nov 22 08:53:05 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 40.407s CPU time.
Nov 22 08:53:05 compute-0 systemd-machined[155703]: Machine qemu-15-instance-0000000e terminated.
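[annotation] The \x2d runs in the scope names two lines up are systemd unit-name escapes for "-"; decoded, the scope maps back to the machine name systemd-machined just reported. A small sketch of the decoding:

    # Scope name copied from the journal text above.
    scope = r"machine-qemu\x2d15\x2dinstance\x2d0000000e.scope"
    machine = scope[len("machine-"):-len(".scope")].replace(r"\x2d", "-")
    print(machine)  # -> qemu-15-instance-0000000e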
Nov 22 08:53:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:05.294 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[9d45336d-8e9a-427a-b43d-59dd93c75496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:05.298 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[0a6918ee-eede-4e26-ab49-e848091d77b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:05.326 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[333b76bf-2ceb-47c3-8741-65cbde3f7d19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:05.345 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[1e238cf0-8af8-4f1a-98c1-50b1cb420195]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cf0b2bb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:a1:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658013, 'reachable_time': 31479, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254284, 'error': None, 'target': 'ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:05.364 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[64de3ba5-77f4-4ed5-bcac-66f8858842ce]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5cf0b2bb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658022, 'tstamp': 658022}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254285, 'error': None, 'target': 'ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5cf0b2bb-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658025, 'tstamp': 658025}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254285, 'error': None, 'target': 'ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
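[annotation] The privsep replies above are pyroute2 netlink messages (RTM_NEWLINK / RTM_NEWADDR) serialized as dicts, with attributes arriving as [name, value] pairs under 'attrs'. A minimal helper mirroring pyroute2's get_attr (hypothetical helper, shown with a slice of the RTM_NEWADDR payload logged above):

    def get_attr(msg, name):
        # Return the first attribute value matching `name`, else None.
        for key, value in msg.get('attrs', []):
            if key == name:
                return value
        return None

    msg = {'attrs': [['IFA_ADDRESS', '169.254.169.254'],
                     ['IFA_LABEL', 'tap5cf0b2bb-a1']]}
    print(get_attr(msg, 'IFA_ADDRESS'))  # -> 169.254.169.254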
Nov 22 08:53:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:05.367 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cf0b2bb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.369 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.375 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:05.376 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cf0b2bb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:53:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:05.376 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:53:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:05.377 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cf0b2bb-a0, col_values=(('external_ids', {'iface-id': '7ba31b4f-cb70-4305-a919-49ac9f8bddd1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:53:05 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:05.378 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
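[annotation] The three ovsdbapp transactions above (DelPortCommand, AddPortCommand, DbSetCommand) re-plumb the metadata tap from br-ex onto br-int and tag it with its Neutron iface-id. A sketch of equivalent calls through ovsdbapp's Open_vSwitch API — a minimal sketch assuming the standard ovsdbapp connection setup and local db.sock path; the agent actually issued these as three one-command transactions:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Batched into one transaction here only for brevity.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap5cf0b2bb-a0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap5cf0b2bb-a0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap5cf0b2bb-a0',
            ('external_ids',
             {'iface-id': '7ba31b4f-cb70-4305-a919-49ac9f8bddd1'})))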
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.458 189273 INFO nova.virt.libvirt.driver [-] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Instance destroyed successfully.
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.459 189273 DEBUG nova.objects.instance [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lazy-loading 'resources' on Instance uuid 11f1996b-9b7f-4973-bd95-263ee88f2a2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.474 189273 DEBUG nova.virt.libvirt.vif [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T08:52:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1872582748',display_name='tempest-TestNetworkBasicOps-server-1872582748',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1872582748',id=14,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFJLIb76PnCaJEe4HChP2jiItWyMpby96mqJl49AemNpvNQl96CGTuk1xlLEu9oUiYnPfzKh+r0wHC94QvPJiTBVk9H9vnt/wqO/1H/DIS+I2JDpQmQ6QUZfCAf0cVYd9w==',key_name='tempest-TestNetworkBasicOps-1622596734',keypairs=<?>,launch_index=0,launched_at=2025-11-22T08:52:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b97da7a1b46046e59c36f5af412de432',ramdisk_id='',reservation_id='r-ty6kzn5h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1679658819',owner_user_name='tempest-TestNetworkBasicOps-1679658819-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T08:52:20Z,user_data=None,user_id='584cc3e3a5224a2e9a08273882841998',uuid=11f1996b-9b7f-4973-bd95-263ee88f2a2a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aae6fb4f-1301-4132-a140-67c2d72f334c", "address": "fa:16:3e:83:9d:01", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaae6fb4f-13", 
"ovs_interfaceid": "aae6fb4f-1301-4132-a140-67c2d72f334c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.475 189273 DEBUG nova.network.os_vif_util [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Converting VIF {"id": "aae6fb4f-1301-4132-a140-67c2d72f334c", "address": "fa:16:3e:83:9d:01", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaae6fb4f-13", "ovs_interfaceid": "aae6fb4f-1301-4132-a140-67c2d72f334c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.476 189273 DEBUG nova.network.os_vif_util [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:83:9d:01,bridge_name='br-int',has_traffic_filtering=True,id=aae6fb4f-1301-4132-a140-67c2d72f334c,network=Network(5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaae6fb4f-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.476 189273 DEBUG os_vif [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:9d:01,bridge_name='br-int',has_traffic_filtering=True,id=aae6fb4f-1301-4132-a140-67c2d72f334c,network=Network(5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaae6fb4f-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.478 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.478 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaae6fb4f-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.480 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.482 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.485 189273 INFO os_vif [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:9d:01,bridge_name='br-int',has_traffic_filtering=True,id=aae6fb4f-1301-4132-a140-67c2d72f334c,network=Network(5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaae6fb4f-13')
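[annotation] The unplug path above runs through os-vif's public entry points with the VIFOpenVSwitch object printed earlier. A condensed sketch under the assumption of os-vif's documented object API (field values copied from the log; executing it for real needs a reachable Open vSwitch):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()

    net = network.Network(id='5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='aae6fb4f-1301-4132-a140-67c2d72f334c',
        address='fa:16:3e:83:9d:01',
        vif_name='tapaae6fb4f-13',
        bridge_name='br-int',
        network=net,
        plugin='ovs')
    inst = instance_info.InstanceInfo(
        uuid='11f1996b-9b7f-4973-bd95-263ee88f2a2a',
        name='instance-0000000e')

    # Dispatches to the 'ovs' plugin, which deletes the port from br-int.
    os_vif.unplug(ovs_vif, inst)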
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.486 189273 INFO nova.virt.libvirt.driver [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Deleting instance files /var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a_del
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.486 189273 INFO nova.virt.libvirt.driver [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Deletion of /var/lib/nova/instances/11f1996b-9b7f-4973-bd95-263ee88f2a2a_del complete
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.516 189273 DEBUG nova.compute.manager [req-0624ac4d-4d6f-4ff0-bbed-c556831870a1 req-6e91f024-6caf-439a-9ef9-eb19f55c0687 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Received event network-vif-unplugged-aae6fb4f-1301-4132-a140-67c2d72f334c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.516 189273 DEBUG oslo_concurrency.lockutils [req-0624ac4d-4d6f-4ff0-bbed-c556831870a1 req-6e91f024-6caf-439a-9ef9-eb19f55c0687 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.516 189273 DEBUG oslo_concurrency.lockutils [req-0624ac4d-4d6f-4ff0-bbed-c556831870a1 req-6e91f024-6caf-439a-9ef9-eb19f55c0687 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.517 189273 DEBUG oslo_concurrency.lockutils [req-0624ac4d-4d6f-4ff0-bbed-c556831870a1 req-6e91f024-6caf-439a-9ef9-eb19f55c0687 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.517 189273 DEBUG nova.compute.manager [req-0624ac4d-4d6f-4ff0-bbed-c556831870a1 req-6e91f024-6caf-439a-9ef9-eb19f55c0687 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] No waiting events found dispatching network-vif-unplugged-aae6fb4f-1301-4132-a140-67c2d72f334c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.517 189273 DEBUG nova.compute.manager [req-0624ac4d-4d6f-4ff0-bbed-c556831870a1 req-6e91f024-6caf-439a-9ef9-eb19f55c0687 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Received event network-vif-unplugged-aae6fb4f-1301-4132-a140-67c2d72f334c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.552 189273 INFO nova.compute.manager [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Took 0.37 seconds to destroy the instance on the hypervisor.
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.552 189273 DEBUG oslo.service.loopingcall [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.552 189273 DEBUG nova.compute.manager [-] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 08:53:05 compute-0 nova_compute[189268]: 2025-11-22 08:53:05.553 189273 DEBUG nova.network.neutron [-] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 08:53:06 compute-0 nova_compute[189268]: 2025-11-22 08:53:06.017 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:06 compute-0 nova_compute[189268]: 2025-11-22 08:53:06.503 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:07 compute-0 nova_compute[189268]: 2025-11-22 08:53:07.176 189273 DEBUG nova.network.neutron [-] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:53:07 compute-0 nova_compute[189268]: 2025-11-22 08:53:07.200 189273 INFO nova.compute.manager [-] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Took 1.65 seconds to deallocate network for instance.
Nov 22 08:53:07 compute-0 nova_compute[189268]: 2025-11-22 08:53:07.245 189273 DEBUG oslo_concurrency.lockutils [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:07 compute-0 nova_compute[189268]: 2025-11-22 08:53:07.245 189273 DEBUG oslo_concurrency.lockutils [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:07 compute-0 nova_compute[189268]: 2025-11-22 08:53:07.284 189273 DEBUG nova.compute.manager [req-64c65610-e106-4966-841c-a742801c3044 req-e80bf344-e5fd-4008-9c07-7cdb63b8c252 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Received event network-vif-deleted-aae6fb4f-1301-4132-a140-67c2d72f334c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:53:07 compute-0 nova_compute[189268]: 2025-11-22 08:53:07.325 189273 DEBUG nova.compute.provider_tree [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:53:07 compute-0 nova_compute[189268]: 2025-11-22 08:53:07.343 189273 DEBUG nova.scheduler.client.report [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:53:07 compute-0 nova_compute[189268]: 2025-11-22 08:53:07.368 189273 DEBUG oslo_concurrency.lockutils [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:07 compute-0 nova_compute[189268]: 2025-11-22 08:53:07.411 189273 INFO nova.scheduler.client.report [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Deleted allocations for instance 11f1996b-9b7f-4973-bd95-263ee88f2a2a
Nov 22 08:53:07 compute-0 nova_compute[189268]: 2025-11-22 08:53:07.457 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:07 compute-0 nova_compute[189268]: 2025-11-22 08:53:07.496 189273 DEBUG oslo_concurrency.lockutils [None req-b01c436d-e159-41d1-afcf-23c5a00bb61f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.315s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:07 compute-0 nova_compute[189268]: 2025-11-22 08:53:07.604 189273 DEBUG nova.compute.manager [req-b04a1a22-67f2-4df9-85dc-1263f519042d req-0ae9a0be-97e8-47f2-9c9f-19f478227e45 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Received event network-vif-plugged-aae6fb4f-1301-4132-a140-67c2d72f334c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:53:07 compute-0 nova_compute[189268]: 2025-11-22 08:53:07.604 189273 DEBUG oslo_concurrency.lockutils [req-b04a1a22-67f2-4df9-85dc-1263f519042d req-0ae9a0be-97e8-47f2-9c9f-19f478227e45 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:07 compute-0 nova_compute[189268]: 2025-11-22 08:53:07.605 189273 DEBUG oslo_concurrency.lockutils [req-b04a1a22-67f2-4df9-85dc-1263f519042d req-0ae9a0be-97e8-47f2-9c9f-19f478227e45 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:07 compute-0 nova_compute[189268]: 2025-11-22 08:53:07.605 189273 DEBUG oslo_concurrency.lockutils [req-b04a1a22-67f2-4df9-85dc-1263f519042d req-0ae9a0be-97e8-47f2-9c9f-19f478227e45 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "11f1996b-9b7f-4973-bd95-263ee88f2a2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:07 compute-0 nova_compute[189268]: 2025-11-22 08:53:07.605 189273 DEBUG nova.compute.manager [req-b04a1a22-67f2-4df9-85dc-1263f519042d req-0ae9a0be-97e8-47f2-9c9f-19f478227e45 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] No waiting events found dispatching network-vif-plugged-aae6fb4f-1301-4132-a140-67c2d72f334c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:53:07 compute-0 nova_compute[189268]: 2025-11-22 08:53:07.605 189273 WARNING nova.compute.manager [req-b04a1a22-67f2-4df9-85dc-1263f519042d req-0ae9a0be-97e8-47f2-9c9f-19f478227e45 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Received unexpected event network-vif-plugged-aae6fb4f-1301-4132-a140-67c2d72f334c for instance with vm_state deleted and task_state None.
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.288 189273 DEBUG oslo_concurrency.lockutils [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquiring lock "38817707-1f5a-4596-bfd2-b48048331de7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.288 189273 DEBUG oslo_concurrency.lockutils [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "38817707-1f5a-4596-bfd2-b48048331de7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.288 189273 DEBUG oslo_concurrency.lockutils [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquiring lock "38817707-1f5a-4596-bfd2-b48048331de7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.289 189273 DEBUG oslo_concurrency.lockutils [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "38817707-1f5a-4596-bfd2-b48048331de7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.289 189273 DEBUG oslo_concurrency.lockutils [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "38817707-1f5a-4596-bfd2-b48048331de7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.290 189273 INFO nova.compute.manager [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Terminating instance
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.292 189273 DEBUG nova.compute.manager [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 08:53:09 compute-0 kernel: tap1a2be7e7-4a (unregistering): left promiscuous mode
Nov 22 08:53:09 compute-0 NetworkManager[56326]: <info>  [1763801589.3265] device (tap1a2be7e7-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 08:53:09 compute-0 ovn_controller[97783]: 2025-11-22T08:53:09Z|00162|binding|INFO|Releasing lport 1a2be7e7-4a90-44c8-bdf7-adac66f1e84d from this chassis (sb_readonly=0)
Nov 22 08:53:09 compute-0 ovn_controller[97783]: 2025-11-22T08:53:09Z|00163|binding|INFO|Setting lport 1a2be7e7-4a90-44c8-bdf7-adac66f1e84d down in Southbound
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.328 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:09 compute-0 ovn_controller[97783]: 2025-11-22T08:53:09Z|00164|binding|INFO|Removing iface tap1a2be7e7-4a ovn-installed in OVS
Nov 22 08:53:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:09.341 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:15:7f 10.100.0.3'], port_security=['fa:16:3e:7a:15:7f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '38817707-1f5a-4596-bfd2-b48048331de7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b97da7a1b46046e59c36f5af412de432', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04ad741a-81e1-45be-b72e-4b39973817da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42356185-0f5c-4367-9443-beeb712f6f09, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=1a2be7e7-4a90-44c8-bdf7-adac66f1e84d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:53:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:09.343 106642 INFO neutron.agent.ovn.metadata.agent [-] Port 1a2be7e7-4a90-44c8-bdf7-adac66f1e84d in datapath 5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3 unbound from our chassis
Nov 22 08:53:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:09.345 106642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.348 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:09.346 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[5e5215bc-690c-425f-82f0-080ca28dd03a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:09.350 106642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3 namespace which is not needed anymore
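[annotation] Metadata namespaces are named ovnmeta-<neutron network UUID>, so the teardown above can be verified from the host. A tiny check (assumes iproute2's `ip netns list` is available):

    import subprocess

    ns = 'ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3'
    out = subprocess.run(['ip', 'netns', 'list'],
                         capture_output=True, text=True, check=True).stdout
    print('namespace gone' if ns not in out else 'namespace still present')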
Nov 22 08:53:09 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 22 08:53:09 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 48.675s CPU time.
Nov 22 08:53:09 compute-0 systemd-machined[155703]: Machine qemu-13-instance-0000000c terminated.
Nov 22 08:53:09 compute-0 neutron-haproxy-ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3[253180]: [NOTICE]   (253185) : haproxy version is 2.8.14-c23fe91
Nov 22 08:53:09 compute-0 neutron-haproxy-ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3[253180]: [NOTICE]   (253185) : path to executable is /usr/sbin/haproxy
Nov 22 08:53:09 compute-0 neutron-haproxy-ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3[253180]: [WARNING]  (253185) : Exiting Master process...
Nov 22 08:53:09 compute-0 neutron-haproxy-ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3[253180]: [ALERT]    (253185) : Current worker (253187) exited with code 143 (Terminated)
Nov 22 08:53:09 compute-0 neutron-haproxy-ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3[253180]: [WARNING]  (253185) : All workers exited. Exiting... (0)
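[annotation] The worker's exit code 143 above is the usual 128 + signal-number encoding, i.e. SIGTERM (15) sent during the namespace teardown. A one-line decode:

    import signal

    print(signal.Signals(143 - 128).name)  # -> SIGTERM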
Nov 22 08:53:09 compute-0 systemd[1]: libpod-e3658ab95dc0e6ee335f13a59651e35fb9a9ca0407e21e530ca321d3c8292072.scope: Deactivated successfully.
Nov 22 08:53:09 compute-0 podman[254330]: 2025-11-22 08:53:09.518615234 +0000 UTC m=+0.057455988 container died e3658ab95dc0e6ee335f13a59651e35fb9a9ca0407e21e530ca321d3c8292072 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.552 189273 INFO nova.virt.libvirt.driver [-] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Instance destroyed successfully.
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.553 189273 DEBUG nova.objects.instance [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lazy-loading 'resources' on Instance uuid 38817707-1f5a-4596-bfd2-b48048331de7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:53:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e3658ab95dc0e6ee335f13a59651e35fb9a9ca0407e21e530ca321d3c8292072-userdata-shm.mount: Deactivated successfully.
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.574 189273 DEBUG nova.virt.libvirt.vif [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-472251035',display_name='tempest-TestNetworkBasicOps-server-472251035',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-472251035',id=12,image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL9vycT5NJv7h5GytTrKsGClvziWtZCPE2ibnv98G7plGcyXOOnBvQoSMG5BU87Xual/uEqsQJDZ+kok1766O/+Mm3LWOYUghijS4tCtVJk5eyI0zce0gefqvKXvW6kXXQ==',key_name='tempest-TestNetworkBasicOps-203402494',keypairs=<?>,launch_index=0,launched_at=2025-11-22T08:51:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b97da7a1b46046e59c36f5af412de432',ramdisk_id='',reservation_id='r-bfgkwdxj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ce3bf941-aae6-43cc-92e1-b0eff9cc9fbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1679658819',owner_user_name='tempest-TestNetworkBasicOps-1679658819-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T08:51:07Z,user_data=None,user_id='584cc3e3a5224a2e9a08273882841998',uuid=38817707-1f5a-4596-bfd2-b48048331de7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "address": "fa:16:3e:7a:15:7f", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a2be7e7-4a", "ovs_interfaceid": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "qbh_params": null, "qbg_params": 
null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.575 189273 DEBUG nova.network.os_vif_util [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Converting VIF {"id": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "address": "fa:16:3e:7a:15:7f", "network": {"id": "5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3", "bridge": "br-int", "label": "tempest-network-smoke--878622863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97da7a1b46046e59c36f5af412de432", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a2be7e7-4a", "ovs_interfaceid": "1a2be7e7-4a90-44c8-bdf7-adac66f1e84d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:53:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-20b1277e07964988bdc9ea576564fa9dd5d76dd5113bc767b48f819d067085ff-merged.mount: Deactivated successfully.
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.577 189273 DEBUG nova.network.os_vif_util [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:15:7f,bridge_name='br-int',has_traffic_filtering=True,id=1a2be7e7-4a90-44c8-bdf7-adac66f1e84d,network=Network(5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a2be7e7-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.578 189273 DEBUG os_vif [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:15:7f,bridge_name='br-int',has_traffic_filtering=True,id=1a2be7e7-4a90-44c8-bdf7-adac66f1e84d,network=Network(5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a2be7e7-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.580 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.580 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a2be7e7-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.584 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.587 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.589 189273 INFO os_vif [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:15:7f,bridge_name='br-int',has_traffic_filtering=True,id=1a2be7e7-4a90-44c8-bdf7-adac66f1e84d,network=Network(5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a2be7e7-4a')
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.589 189273 INFO nova.virt.libvirt.driver [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Deleting instance files /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7_del
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.590 189273 INFO nova.virt.libvirt.driver [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Deletion of /var/lib/nova/instances/38817707-1f5a-4596-bfd2-b48048331de7_del complete
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.624 189273 DEBUG nova.compute.manager [req-9fd534c3-f857-45d9-9e2e-f8242698cec9 req-86277de9-6792-4702-8a27-548e29996e00 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Received event network-vif-unplugged-1a2be7e7-4a90-44c8-bdf7-adac66f1e84d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.625 189273 DEBUG oslo_concurrency.lockutils [req-9fd534c3-f857-45d9-9e2e-f8242698cec9 req-86277de9-6792-4702-8a27-548e29996e00 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "38817707-1f5a-4596-bfd2-b48048331de7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.625 189273 DEBUG oslo_concurrency.lockutils [req-9fd534c3-f857-45d9-9e2e-f8242698cec9 req-86277de9-6792-4702-8a27-548e29996e00 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "38817707-1f5a-4596-bfd2-b48048331de7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.626 189273 DEBUG oslo_concurrency.lockutils [req-9fd534c3-f857-45d9-9e2e-f8242698cec9 req-86277de9-6792-4702-8a27-548e29996e00 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "38817707-1f5a-4596-bfd2-b48048331de7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.626 189273 DEBUG nova.compute.manager [req-9fd534c3-f857-45d9-9e2e-f8242698cec9 req-86277de9-6792-4702-8a27-548e29996e00 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] No waiting events found dispatching network-vif-unplugged-1a2be7e7-4a90-44c8-bdf7-adac66f1e84d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.627 189273 DEBUG nova.compute.manager [req-9fd534c3-f857-45d9-9e2e-f8242698cec9 req-86277de9-6792-4702-8a27-548e29996e00 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Received event network-vif-unplugged-1a2be7e7-4a90-44c8-bdf7-adac66f1e84d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 08:53:09 compute-0 podman[254330]: 2025-11-22 08:53:09.653265202 +0000 UTC m=+0.192105956 container cleanup e3658ab95dc0e6ee335f13a59651e35fb9a9ca0407e21e530ca321d3c8292072 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 08:53:09 compute-0 systemd[1]: libpod-conmon-e3658ab95dc0e6ee335f13a59651e35fb9a9ca0407e21e530ca321d3c8292072.scope: Deactivated successfully.
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.694 189273 INFO nova.compute.manager [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Took 0.40 seconds to destroy the instance on the hypervisor.
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.695 189273 DEBUG oslo.service.loopingcall [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.695 189273 DEBUG nova.compute.manager [-] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.696 189273 DEBUG nova.network.neutron [-] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 08:53:09 compute-0 podman[254378]: 2025-11-22 08:53:09.733146035 +0000 UTC m=+0.053548004 container remove e3658ab95dc0e6ee335f13a59651e35fb9a9ca0407e21e530ca321d3c8292072 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:53:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:09.742 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[c26c5431-6a9b-4da2-80ae-19e6b92e7572]: (4, ('Sat Nov 22 08:53:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3 (e3658ab95dc0e6ee335f13a59651e35fb9a9ca0407e21e530ca321d3c8292072)\ne3658ab95dc0e6ee335f13a59651e35fb9a9ca0407e21e530ca321d3c8292072\nSat Nov 22 08:53:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3 (e3658ab95dc0e6ee335f13a59651e35fb9a9ca0407e21e530ca321d3c8292072)\ne3658ab95dc0e6ee335f13a59651e35fb9a9ca0407e21e530ca321d3c8292072\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:09.744 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[55f274ad-8cb9-46c4-a434-845f3d26b92e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:09.745 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cf0b2bb-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.747 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:09 compute-0 kernel: tap5cf0b2bb-a0: left promiscuous mode
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.759 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:09 compute-0 nova_compute[189268]: 2025-11-22 08:53:09.762 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:09.763 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[47e8058b-12d1-4176-8e0a-d607b46a1527]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:09.779 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[3b746b0c-c435-4f1f-9fbf-04897692ab18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:09.780 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[7a3fcf15-3c6d-45cc-bde6-0175c102b8f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:09.795 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[a81e083d-60fc-4765-b3b4-67878a4d68f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658006, 'reachable_time': 19217, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254392, 'error': None, 'target': 'ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:09.798 106754 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5cf0b2bb-abeb-4c7c-9b76-c685a9cea8c3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 08:53:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d5cf0b2bb\x2dabeb\x2d4c7c\x2d9b76\x2dc685a9cea8c3.mount: Deactivated successfully.
Nov 22 08:53:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:09.798 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[b20ae7dc-3587-4358-a246-b6b1a84f4ce2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:09.995 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:09.996 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:09.996 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:10 compute-0 nova_compute[189268]: 2025-11-22 08:53:10.225 189273 DEBUG nova.network.neutron [-] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:53:10 compute-0 nova_compute[189268]: 2025-11-22 08:53:10.244 189273 INFO nova.compute.manager [-] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Took 0.55 seconds to deallocate network for instance.
Nov 22 08:53:10 compute-0 nova_compute[189268]: 2025-11-22 08:53:10.303 189273 DEBUG oslo_concurrency.lockutils [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:10 compute-0 nova_compute[189268]: 2025-11-22 08:53:10.304 189273 DEBUG oslo_concurrency.lockutils [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:10 compute-0 nova_compute[189268]: 2025-11-22 08:53:10.310 189273 DEBUG nova.compute.manager [req-4b44362e-5a30-4535-a278-08bf0b4f583a req-c2a5f422-651f-4b20-9210-09845bd4e526 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Received event network-vif-deleted-1a2be7e7-4a90-44c8-bdf7-adac66f1e84d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:53:10 compute-0 nova_compute[189268]: 2025-11-22 08:53:10.368 189273 DEBUG nova.compute.provider_tree [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:53:10 compute-0 nova_compute[189268]: 2025-11-22 08:53:10.388 189273 DEBUG nova.scheduler.client.report [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:53:10 compute-0 nova_compute[189268]: 2025-11-22 08:53:10.435 189273 DEBUG oslo_concurrency.lockutils [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:10 compute-0 nova_compute[189268]: 2025-11-22 08:53:10.595 189273 INFO nova.scheduler.client.report [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Deleted allocations for instance 38817707-1f5a-4596-bfd2-b48048331de7
Nov 22 08:53:10 compute-0 nova_compute[189268]: 2025-11-22 08:53:10.652 189273 DEBUG oslo_concurrency.lockutils [None req-3654b596-b6e5-43d6-8824-2e1614a7c11f 584cc3e3a5224a2e9a08273882841998 b97da7a1b46046e59c36f5af412de432 - - default default] Lock "38817707-1f5a-4596-bfd2-b48048331de7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:11 compute-0 nova_compute[189268]: 2025-11-22 08:53:11.756 189273 DEBUG nova.compute.manager [req-8dcb0149-1e02-4dff-beb8-c07f8d0b876b req-ac298d15-f086-451c-84d7-56899f6b1a1a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Received event network-vif-plugged-1a2be7e7-4a90-44c8-bdf7-adac66f1e84d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:53:11 compute-0 nova_compute[189268]: 2025-11-22 08:53:11.756 189273 DEBUG oslo_concurrency.lockutils [req-8dcb0149-1e02-4dff-beb8-c07f8d0b876b req-ac298d15-f086-451c-84d7-56899f6b1a1a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "38817707-1f5a-4596-bfd2-b48048331de7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:11 compute-0 nova_compute[189268]: 2025-11-22 08:53:11.757 189273 DEBUG oslo_concurrency.lockutils [req-8dcb0149-1e02-4dff-beb8-c07f8d0b876b req-ac298d15-f086-451c-84d7-56899f6b1a1a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "38817707-1f5a-4596-bfd2-b48048331de7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:11 compute-0 nova_compute[189268]: 2025-11-22 08:53:11.757 189273 DEBUG oslo_concurrency.lockutils [req-8dcb0149-1e02-4dff-beb8-c07f8d0b876b req-ac298d15-f086-451c-84d7-56899f6b1a1a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "38817707-1f5a-4596-bfd2-b48048331de7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:11 compute-0 nova_compute[189268]: 2025-11-22 08:53:11.757 189273 DEBUG nova.compute.manager [req-8dcb0149-1e02-4dff-beb8-c07f8d0b876b req-ac298d15-f086-451c-84d7-56899f6b1a1a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] No waiting events found dispatching network-vif-plugged-1a2be7e7-4a90-44c8-bdf7-adac66f1e84d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:53:11 compute-0 nova_compute[189268]: 2025-11-22 08:53:11.758 189273 WARNING nova.compute.manager [req-8dcb0149-1e02-4dff-beb8-c07f8d0b876b req-ac298d15-f086-451c-84d7-56899f6b1a1a 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Received unexpected event network-vif-plugged-1a2be7e7-4a90-44c8-bdf7-adac66f1e84d for instance with vm_state deleted and task_state None.
Nov 22 08:53:12 compute-0 nova_compute[189268]: 2025-11-22 08:53:12.460 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:13 compute-0 nova_compute[189268]: 2025-11-22 08:53:13.847 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:14 compute-0 nova_compute[189268]: 2025-11-22 08:53:14.082 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:14 compute-0 podman[254396]: 2025-11-22 08:53:14.16225543 +0000 UTC m=+0.067187547 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:53:14 compute-0 podman[254395]: 2025-11-22 08:53:14.183585587 +0000 UTC m=+0.092111459 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 22 08:53:14 compute-0 podman[254393]: 2025-11-22 08:53:14.184192292 +0000 UTC m=+0.101786705 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:53:14 compute-0 nova_compute[189268]: 2025-11-22 08:53:14.583 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:17 compute-0 nova_compute[189268]: 2025-11-22 08:53:17.464 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:19 compute-0 nova_compute[189268]: 2025-11-22 08:53:19.586 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:20 compute-0 nova_compute[189268]: 2025-11-22 08:53:20.457 189273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763801585.455493, 11f1996b-9b7f-4973-bd95-263ee88f2a2a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:53:20 compute-0 nova_compute[189268]: 2025-11-22 08:53:20.458 189273 INFO nova.compute.manager [-] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] VM Stopped (Lifecycle Event)
Nov 22 08:53:20 compute-0 nova_compute[189268]: 2025-11-22 08:53:20.479 189273 DEBUG nova.compute.manager [None req-f81af380-c9c8-492a-9dd8-11355344c911 - - - - - -] [instance: 11f1996b-9b7f-4973-bd95-263ee88f2a2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.219 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.219 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.268 189273 DEBUG nova.compute.manager [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.374 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.374 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.387 189273 DEBUG nova.virt.hardware [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.388 189273 INFO nova.compute.claims [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Claim successful on node compute-0.ctlplane.example.com
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.504 189273 DEBUG nova.compute.provider_tree [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.525 189273 DEBUG nova.scheduler.client.report [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.554 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.555 189273 DEBUG nova.compute.manager [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.609 189273 DEBUG nova.compute.manager [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.609 189273 DEBUG nova.network.neutron [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.630 189273 INFO nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.647 189273 DEBUG nova.compute.manager [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.738 189273 DEBUG nova.compute.manager [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.739 189273 DEBUG nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.740 189273 INFO nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Creating image(s)
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.740 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "/var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.741 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "/var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.742 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "/var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.742 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.742 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:21 compute-0 nova_compute[189268]: 2025-11-22 08:53:21.964 189273 DEBUG nova.policy [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '37215e9bc58040aeb55ccd7e534b2a8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6872b219a7f441adb7db6dc2b4e66fd7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 08:53:22 compute-0 nova_compute[189268]: 2025-11-22 08:53:22.467 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:23 compute-0 nova_compute[189268]: 2025-11-22 08:53:23.649 189273 DEBUG nova.network.neutron [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Successfully created port: ed7b62da-e420-4250-acdc-71cedcdde8ed _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 08:53:23 compute-0 nova_compute[189268]: 2025-11-22 08:53:23.970 189273 DEBUG oslo_concurrency.processutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.035 189273 DEBUG oslo_concurrency.processutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f.part --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.036 189273 DEBUG nova.virt.images [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] 0f738201-0a54-4f17-a455-df9aa7963f79 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.038 189273 DEBUG nova.privsep.utils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.038 189273 DEBUG oslo_concurrency.processutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f.part /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:53:24 compute-0 podman[254454]: 2025-11-22 08:53:24.126912533 +0000 UTC m=+0.079937775 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:53:24 compute-0 podman[254453]: 2025-11-22 08:53:24.152174766 +0000 UTC m=+0.101422627 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.283 189273 DEBUG oslo_concurrency.processutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f.part /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f.converted" returned: 0 in 0.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.288 189273 DEBUG oslo_concurrency.processutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.381 189273 DEBUG oslo_concurrency.processutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f.converted --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.382 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.395 189273 DEBUG oslo_concurrency.processutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.458 189273 DEBUG oslo_concurrency.processutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.459 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.460 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.473 189273 DEBUG oslo_concurrency.processutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.537 189273 DEBUG oslo_concurrency.processutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.538 189273 DEBUG oslo_concurrency.processutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f,backing_fmt=raw /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.559 189273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763801589.548869, 38817707-1f5a-4596-bfd2-b48048331de7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.561 189273 INFO nova.compute.manager [-] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] VM Stopped (Lifecycle Event)
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.592 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.596 189273 DEBUG nova.compute.manager [None req-790e0fef-6ce8-4417-b07c-059a1744385e - - - - - -] [instance: 38817707-1f5a-4596-bfd2-b48048331de7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.598 189273 DEBUG oslo_concurrency.processutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f,backing_fmt=raw /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk 1073741824" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.599 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
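The qemu-img create that just returned builds the per-instance overlay: a 1 GiB qcow2 whose backing file is the shared raw base image, so only the instance's writes consume space under its directory. An equivalent standalone sketch with the exact logged arguments:

import subprocess

# Create the copy-on-write instance disk on top of the shared base image.
subprocess.run(
    ['qemu-img', 'create', '-f', 'qcow2',
     '-o', 'backing_file=/var/lib/nova/instances/_base/'
           '1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f,backing_fmt=raw',
     '/var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk',
     '1073741824'],
    check=True)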
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.600 189273 DEBUG oslo_concurrency.processutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.689 189273 DEBUG nova.network.neutron [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Successfully updated port: ed7b62da-e420-4250-acdc-71cedcdde8ed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.694 189273 DEBUG oslo_concurrency.processutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.695 189273 DEBUG nova.virt.disk.api [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Checking if we can resize image /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.696 189273 DEBUG oslo_concurrency.processutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.720 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.720 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquired lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.721 189273 DEBUG nova.network.neutron [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.764 189273 DEBUG oslo_concurrency.processutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.765 189273 DEBUG nova.virt.disk.api [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Cannot resize image /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
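The "Cannot resize image ... to a smaller size" entry is the expected no-op branch: the m1.nano flavor asks for 1 GiB and the overlay is already 1 GiB virtual. A sketch of the comparison being made, reading the same qemu-img JSON seen above:

import json
import subprocess

# Fetch the disk's current virtual size and refuse to shrink it; growing
# would proceed via a resize, shrinking is never attempted.
info = json.loads(subprocess.run(
    ['qemu-img', 'info', '--force-share', '--output=json',
     '/var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk'],
    capture_output=True, text=True, check=True).stdout)
requested = 1073741824
can_resize = info['virtual-size'] < requested  # False here: both are 1 GiB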
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.766 189273 DEBUG nova.objects.instance [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lazy-loading 'migration_context' on Instance uuid 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.825 189273 DEBUG nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.825 189273 DEBUG nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Ensure instance console log exists: /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.826 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.826 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.827 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.919 189273 DEBUG nova.compute.manager [req-b668e6f1-4865-4e85-b0a1-9a0c09f05f39 req-c57553c1-dfa5-4805-8ee4-0a2bdb3c01d7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Received event network-changed-ed7b62da-e420-4250-acdc-71cedcdde8ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.920 189273 DEBUG nova.compute.manager [req-b668e6f1-4865-4e85-b0a1-9a0c09f05f39 req-c57553c1-dfa5-4805-8ee4-0a2bdb3c01d7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Refreshing instance network info cache due to event network-changed-ed7b62da-e420-4250-acdc-71cedcdde8ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:53:24 compute-0 nova_compute[189268]: 2025-11-22 08:53:24.920 189273 DEBUG oslo_concurrency.lockutils [req-b668e6f1-4865-4e85-b0a1-9a0c09f05f39 req-c57553c1-dfa5-4805-8ee4-0a2bdb3c01d7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.019 189273 DEBUG nova.network.neutron [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.849 189273 DEBUG nova.network.neutron [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updating instance_info_cache with network_info: [{"id": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "address": "fa:16:3e:84:a4:4f", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped7b62da-e4", "ovs_interfaceid": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.868 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Releasing lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.869 189273 DEBUG nova.compute.manager [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Instance network_info: |[{"id": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "address": "fa:16:3e:84:a4:4f", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped7b62da-e4", "ovs_interfaceid": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.870 189273 DEBUG oslo_concurrency.lockutils [req-b668e6f1-4865-4e85-b0a1-9a0c09f05f39 req-c57553c1-dfa5-4805-8ee4-0a2bdb3c01d7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.871 189273 DEBUG nova.network.neutron [req-b668e6f1-4865-4e85-b0a1-9a0c09f05f39 req-c57553c1-dfa5-4805-8ee4-0a2bdb3c01d7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Refreshing network info cache for port ed7b62da-e420-4250-acdc-71cedcdde8ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
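The network_info blob logged above is a JSON list of VIF dicts; everything the guest XML below needs (MAC, fixed IP, MTU, tap device name) comes out of it. A minimal extraction sketch over the fields exactly as they appear in this log:

# Trimmed copy of the single VIF from the network_info entries above.
vif = {
    "id": "ed7b62da-e420-4250-acdc-71cedcdde8ed",
    "address": "fa:16:3e:84:a4:4f",
    "devname": "taped7b62da-e4",
    "network": {"meta": {"mtu": 1442},
                "subnets": [{"ips": [{"address": "10.100.3.45"}]}]},
}
mac = vif["address"]                                          # fa:16:3e:84:a4:4f
fixed_ip = vif["network"]["subnets"][0]["ips"][0]["address"]  # 10.100.3.45
mtu = vif["network"]["meta"]["mtu"]                           # 1442 (tunnel overhead)
tap = vif["devname"]                                          # taped7b62da-e4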
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.874 189273 DEBUG nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Start _get_guest_xml network_info=[{"id": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "address": "fa:16:3e:84:a4:4f", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped7b62da-e4", "ovs_interfaceid": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=0f738201-0a54-4f17-a455-df9aa7963f79,min_disk=0,min_ram=0,name='tempest-scenario-img--1939725698',owner='6872b219a7f441adb7db6dc2b4e66fd7',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T08:53:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': '0f738201-0a54-4f17-a455-df9aa7963f79'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.888 189273 WARNING nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.900 189273 DEBUG nova.virt.libvirt.host [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.901 189273 DEBUG nova.virt.libvirt.host [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.908 189273 DEBUG nova.virt.libvirt.host [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.909 189273 DEBUG nova.virt.libvirt.host [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
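The cgroups probe above finds no cpu controller under v1 but does under v2, so libvirt CPU tuning goes through cgroups v2 on this host. The v2 check reduces to a single file read; a sketch (nova's actual helper lives in nova/virt/libvirt/host.py, as the log paths show):

# On a cgroups-v2 host the available controllers are listed in one file.
with open('/sys/fs/cgroup/cgroup.controllers') as f:
    has_cpu_controller = 'cpu' in f.read().split()  # True on this host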
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.910 189273 DEBUG nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.910 189273 DEBUG nova.virt.hardware [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T08:46:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='60cc47c3-347f-4964-bb52-9bef8d0548a9',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=0f738201-0a54-4f17-a455-df9aa7963f79,min_disk=0,min_ram=0,name='tempest-scenario-img--1939725698',owner='6872b219a7f441adb7db6dc2b4e66fd7',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T08:53:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.911 189273 DEBUG nova.virt.hardware [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.912 189273 DEBUG nova.virt.hardware [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.912 189273 DEBUG nova.virt.hardware [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.913 189273 DEBUG nova.virt.hardware [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.913 189273 DEBUG nova.virt.hardware [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.914 189273 DEBUG nova.virt.hardware [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.915 189273 DEBUG nova.virt.hardware [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.915 189273 DEBUG nova.virt.hardware [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.916 189273 DEBUG nova.virt.hardware [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.916 189273 DEBUG nova.virt.hardware [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
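With no flavor or image topology constraints, the limits collapse to the 65536 defaults and the only factoring of 1 vCPU is sockets=1, cores=1, threads=1. A toy enumeration that reproduces the logged result (not nova's literal loop in hardware.py):

import itertools

vcpus, limit = 1, 65536  # limits as logged: 65536 sockets/cores/threads
possible = [(s, c, t)
            for s, c, t in itertools.product(range(1, vcpus + 1), repeat=3)
            if s * c * t == vcpus and max(s, c, t) <= limit]
print(possible)  # [(1, 1, 1)] -> "Got 1 possible topologies"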
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.922 189273 DEBUG nova.virt.libvirt.vif [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:53:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn',id=15,image_ref='0f738201-0a54-4f17-a455-df9aa7963f79',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='e65dbf71-31dd-495a-8544-26d84c5284b3'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6872b219a7f441adb7db6dc2b4e66fd7',ramdisk_id='',reservation_id='r-eyix9rv8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0f738201-0a54-4f17-a455-df9aa7963f79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1457752866',owner_user_name='tempest-PrometheusGabbiTest-1457752866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:53:21Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='37215e9bc58040aeb55ccd7e534b2a8c',uuid=4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "address": "fa:16:3e:84:a4:4f", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped7b62da-e4", "ovs_interfaceid": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.922 189273 DEBUG nova.network.os_vif_util [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Converting VIF {"id": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "address": "fa:16:3e:84:a4:4f", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped7b62da-e4", "ovs_interfaceid": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.924 189273 DEBUG nova.network.os_vif_util [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:a4:4f,bridge_name='br-int',has_traffic_filtering=True,id=ed7b62da-e420-4250-acdc-71cedcdde8ed,network=Network(8ee541ea-f059-4138-b6cf-87ec84c3e9f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped7b62da-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.926 189273 DEBUG nova.objects.instance [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.946 189273 DEBUG nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] End _get_guest_xml xml=<domain type="kvm">
Nov 22 08:53:25 compute-0 nova_compute[189268]:   <uuid>4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5</uuid>
Nov 22 08:53:25 compute-0 nova_compute[189268]:   <name>instance-0000000f</name>
Nov 22 08:53:25 compute-0 nova_compute[189268]:   <memory>131072</memory>
Nov 22 08:53:25 compute-0 nova_compute[189268]:   <vcpu>1</vcpu>
Nov 22 08:53:25 compute-0 nova_compute[189268]:   <metadata>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <nova:name>te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn</nova:name>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <nova:creationTime>2025-11-22 08:53:25</nova:creationTime>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <nova:flavor name="m1.nano">
Nov 22 08:53:25 compute-0 nova_compute[189268]:         <nova:memory>128</nova:memory>
Nov 22 08:53:25 compute-0 nova_compute[189268]:         <nova:disk>1</nova:disk>
Nov 22 08:53:25 compute-0 nova_compute[189268]:         <nova:swap>0</nova:swap>
Nov 22 08:53:25 compute-0 nova_compute[189268]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 08:53:25 compute-0 nova_compute[189268]:         <nova:vcpus>1</nova:vcpus>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       </nova:flavor>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <nova:owner>
Nov 22 08:53:25 compute-0 nova_compute[189268]:         <nova:user uuid="37215e9bc58040aeb55ccd7e534b2a8c">tempest-PrometheusGabbiTest-1457752866-project-member</nova:user>
Nov 22 08:53:25 compute-0 nova_compute[189268]:         <nova:project uuid="6872b219a7f441adb7db6dc2b4e66fd7">tempest-PrometheusGabbiTest-1457752866</nova:project>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       </nova:owner>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <nova:root type="image" uuid="0f738201-0a54-4f17-a455-df9aa7963f79"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <nova:ports>
Nov 22 08:53:25 compute-0 nova_compute[189268]:         <nova:port uuid="ed7b62da-e420-4250-acdc-71cedcdde8ed">
Nov 22 08:53:25 compute-0 nova_compute[189268]:           <nova:ip type="fixed" address="10.100.3.45" ipVersion="4"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:         </nova:port>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       </nova:ports>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     </nova:instance>
Nov 22 08:53:25 compute-0 nova_compute[189268]:   </metadata>
Nov 22 08:53:25 compute-0 nova_compute[189268]:   <sysinfo type="smbios">
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <system>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <entry name="manufacturer">RDO</entry>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <entry name="product">OpenStack Compute</entry>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <entry name="serial">4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5</entry>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <entry name="uuid">4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5</entry>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <entry name="family">Virtual Machine</entry>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     </system>
Nov 22 08:53:25 compute-0 nova_compute[189268]:   </sysinfo>
Nov 22 08:53:25 compute-0 nova_compute[189268]:   <os>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <boot dev="hd"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <smbios mode="sysinfo"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:   </os>
Nov 22 08:53:25 compute-0 nova_compute[189268]:   <features>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <acpi/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <apic/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <vmcoreinfo/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:   </features>
Nov 22 08:53:25 compute-0 nova_compute[189268]:   <clock offset="utc">
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <timer name="hpet" present="no"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:   </clock>
Nov 22 08:53:25 compute-0 nova_compute[189268]:   <cpu mode="host-model" match="exact">
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:53:25 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <target dev="vda" bus="virtio"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <disk type="file" device="cdrom">
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <driver name="qemu" type="raw" cache="none"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.config"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <target dev="sda" bus="sata"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <interface type="ethernet">
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <mac address="fa:16:3e:84:a4:4f"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <mtu size="1442"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <target dev="taped7b62da-e4"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     </interface>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <serial type="pty">
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <log file="/var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/console.log" append="off"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     </serial>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <video>
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     </video>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <input type="tablet" bus="usb"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <rng model="virtio">
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <backend model="random">/dev/urandom</backend>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <controller type="usb" index="0"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     <memballoon model="virtio">
Nov 22 08:53:25 compute-0 nova_compute[189268]:       <stats period="10"/>
Nov 22 08:53:25 compute-0 nova_compute[189268]:     </memballoon>
Nov 22 08:53:25 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:53:25 compute-0 nova_compute[189268]: </domain>
Nov 22 08:53:25 compute-0 nova_compute[189268]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
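The domain XML above is handed to libvirt as-is; the systemd "Started Virtual Machine qemu-16-instance-0000000f" line further down is the result. To sanity-check a document like this outside nova, standard-library parsing is enough (assumes the XML was saved to instance-0000000f.xml; the filename is illustrative):

import xml.etree.ElementTree as ET

dom = ET.parse('instance-0000000f.xml').getroot()
print(dom.findtext('name'))     # instance-0000000f
print(dom.findtext('memory'))   # 131072 KiB, i.e. the flavor's 128 MiB
for disk in dom.findall('./devices/disk'):
    # vda: the qcow2 overlay; sda: the config-drive ISO created below
    print(disk.get('device'), disk.find('source').get('file'))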
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.947 189273 DEBUG nova.compute.manager [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Preparing to wait for external event network-vif-plugged-ed7b62da-e420-4250-acdc-71cedcdde8ed prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.947 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.948 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.948 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.948 189273 DEBUG nova.virt.libvirt.vif [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:53:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn',id=15,image_ref='0f738201-0a54-4f17-a455-df9aa7963f79',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='e65dbf71-31dd-495a-8544-26d84c5284b3'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6872b219a7f441adb7db6dc2b4e66fd7',ramdisk_id='',reservation_id='r-eyix9rv8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0f738201-0a54-4f17-a455-df9aa7963f79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1457752866',owner_user_name='tempest-PrometheusGabbiTest-1457752866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:53:21Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='37215e9bc58040aeb55ccd7e534b2a8c',uuid=4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "address": "fa:16:3e:84:a4:4f", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped7b62da-e4", "ovs_interfaceid": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.949 189273 DEBUG nova.network.os_vif_util [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Converting VIF {"id": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "address": "fa:16:3e:84:a4:4f", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped7b62da-e4", "ovs_interfaceid": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.949 189273 DEBUG nova.network.os_vif_util [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:a4:4f,bridge_name='br-int',has_traffic_filtering=True,id=ed7b62da-e420-4250-acdc-71cedcdde8ed,network=Network(8ee541ea-f059-4138-b6cf-87ec84c3e9f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped7b62da-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.949 189273 DEBUG os_vif [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:a4:4f,bridge_name='br-int',has_traffic_filtering=True,id=ed7b62da-e420-4250-acdc-71cedcdde8ed,network=Network(8ee541ea-f059-4138-b6cf-87ec84c3e9f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped7b62da-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.950 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.950 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.951 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.954 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.954 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped7b62da-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.954 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=taped7b62da-e4, col_values=(('external_ids', {'iface-id': 'ed7b62da-e420-4250-acdc-71cedcdde8ed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:a4:4f', 'vm-uuid': '4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.956 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:25 compute-0 NetworkManager[56326]: <info>  [1763801605.9570] manager: (taped7b62da-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.959 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.965 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:25 compute-0 nova_compute[189268]: 2025-11-22 08:53:25.966 189273 INFO os_vif [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:a4:4f,bridge_name='br-int',has_traffic_filtering=True,id=ed7b62da-e420-4250-acdc-71cedcdde8ed,network=Network(8ee541ea-f059-4138-b6cf-87ec84c3e9f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped7b62da-e4')
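The two ovsdbapp transactions above (AddPortCommand plus DbSetCommand on external_ids) are os-vif's port plug. Done by hand, with every value taken from the log, the same effect is roughly:

import subprocess

# Add the tap to br-int and tag it so ovn-controller can claim the lport;
# the "Claiming lport ..." messages below react to exactly these ids.
subprocess.run(
    ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'taped7b62da-e4',
     '--', 'set', 'Interface', 'taped7b62da-e4',
     'external_ids:iface-id=ed7b62da-e420-4250-acdc-71cedcdde8ed',
     'external_ids:iface-status=active',
     'external_ids:attached-mac=fa:16:3e:84:a4:4f',
     'external_ids:vm-uuid=4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5'],
    check=True)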
Nov 22 08:53:26 compute-0 nova_compute[189268]: 2025-11-22 08:53:26.024 189273 DEBUG nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:53:26 compute-0 nova_compute[189268]: 2025-11-22 08:53:26.025 189273 DEBUG nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:53:26 compute-0 nova_compute[189268]: 2025-11-22 08:53:26.025 189273 DEBUG nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] No VIF found with MAC fa:16:3e:84:a4:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 08:53:26 compute-0 nova_compute[189268]: 2025-11-22 08:53:26.026 189273 INFO nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Using config drive
Nov 22 08:53:26 compute-0 nova_compute[189268]: 2025-11-22 08:53:26.418 189273 INFO nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Creating config drive at /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.config
Nov 22 08:53:26 compute-0 nova_compute[189268]: 2025-11-22 08:53:26.425 189273 DEBUG oslo_concurrency.processutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdo2czv50 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:53:26 compute-0 nova_compute[189268]: 2025-11-22 08:53:26.554 189273 DEBUG oslo_concurrency.processutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdo2czv50" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
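Note that the publisher string in the two mkisofs entries is a single argument ("OpenStack Compute 27.5.2-...") that processutils logs without quoting. Reconstructed as an argv list, the config-drive build is:

import subprocess

# Pack the staged metadata tree (the /tmp/tmpdo2czv50 seen in the log)
# into an ISO9660 volume labelled "config-2"; the guest sees it as the
# sata cdrom (sda) declared in the domain XML above.
subprocess.run(
    ['/usr/bin/mkisofs', '-o',
     '/var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.config',
     '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
     '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
     '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpdo2czv50'],
    check=True)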
Nov 22 08:53:26 compute-0 kernel: taped7b62da-e4: entered promiscuous mode
Nov 22 08:53:26 compute-0 NetworkManager[56326]: <info>  [1763801606.6374] manager: (taped7b62da-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Nov 22 08:53:26 compute-0 nova_compute[189268]: 2025-11-22 08:53:26.641 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:26 compute-0 ovn_controller[97783]: 2025-11-22T08:53:26Z|00165|binding|INFO|Claiming lport ed7b62da-e420-4250-acdc-71cedcdde8ed for this chassis.
Nov 22 08:53:26 compute-0 ovn_controller[97783]: 2025-11-22T08:53:26Z|00166|binding|INFO|ed7b62da-e420-4250-acdc-71cedcdde8ed: Claiming fa:16:3e:84:a4:4f 10.100.3.45
Nov 22 08:53:26 compute-0 nova_compute[189268]: 2025-11-22 08:53:26.650 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.655 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:a4:4f 10.100.3.45'], port_security=['fa:16:3e:84:a4:4f 10.100.3.45'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.45/16', 'neutron:device_id': '4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8ee541ea-f059-4138-b6cf-87ec84c3e9f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6872b219a7f441adb7db6dc2b4e66fd7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c782ed20-231b-4e59-ad25-952e10372407', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5efbe77c-7f0b-4c5a-a729-30b470e68fec, chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=ed7b62da-e420-4250-acdc-71cedcdde8ed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.656 106642 INFO neutron.agent.ovn.metadata.agent [-] Port ed7b62da-e420-4250-acdc-71cedcdde8ed in datapath 8ee541ea-f059-4138-b6cf-87ec84c3e9f8 bound to our chassis
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.657 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8ee541ea-f059-4138-b6cf-87ec84c3e9f8
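
Annotation: the "Matched UPDATE: PortBindingUpdatedEvent" line shows how the metadata agent notices the port: it registers an ovsdbapp row event against the OVN southbound Port_Binding table and reacts when a binding gains a chassis. A minimal sketch of such an event class follows; it is not Neutron's actual implementation, and the agent hook it calls is hypothetical.

    # Sketch of an ovsdbapp row event matching the update logged above:
    # fires when a Port_Binding row that had no chassis gets one.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # old=Port_Binding(chassis=[]) in the log: the port was
            # unbound before this update arrived.
            return bool(row.chassis) and not getattr(old, 'chassis', None)

        def run(self, event, row, old):
            self.agent.provision_datapath(row)  # hypothetical agent hook
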
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.671 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[2fdbcdf3-0615-4c37-9518-d6c05605ffa1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.672 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8ee541ea-f1 in ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.674 239666 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8ee541ea-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.675 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[6b197de2-1090-4cda-acec-4d08b56f58f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.675 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[556c4598-014f-46dd-9abf-0718c1cfa97e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:26 compute-0 systemd-machined[155703]: New machine qemu-16-instance-0000000f.
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.687 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[ac250b73-ba3d-47e8-bb9b-a44fe71b2312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:26 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Nov 22 08:53:26 compute-0 nova_compute[189268]: 2025-11-22 08:53:26.694 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:26 compute-0 ovn_controller[97783]: 2025-11-22T08:53:26Z|00167|binding|INFO|Setting lport ed7b62da-e420-4250-acdc-71cedcdde8ed ovn-installed in OVS
Nov 22 08:53:26 compute-0 ovn_controller[97783]: 2025-11-22T08:53:26Z|00168|binding|INFO|Setting lport ed7b62da-e420-4250-acdc-71cedcdde8ed up in Southbound
Nov 22 08:53:26 compute-0 nova_compute[189268]: 2025-11-22 08:53:26.698 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:26 compute-0 systemd-udevd[254574]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.715 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[d72fe4f2-2632-4cc9-9dd1-69764037313f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:26 compute-0 NetworkManager[56326]: <info>  [1763801606.7202] device (taped7b62da-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 08:53:26 compute-0 NetworkManager[56326]: <info>  [1763801606.7215] device (taped7b62da-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.745 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[8649cd0d-43f1-4a7e-ba56-9e934834ce10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:26 compute-0 NetworkManager[56326]: <info>  [1763801606.7520] manager: (tap8ee541ea-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/75)
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.751 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9da468-4dbe-4ef9-9e3c-fe5af987cf48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:26 compute-0 podman[254535]: 2025-11-22 08:53:26.758851128 +0000 UTC m=+0.134510006 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vendor=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, distribution-scope=public, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30)
Nov 22 08:53:26 compute-0 podman[254536]: 2025-11-22 08:53:26.764153409 +0000 UTC m=+0.136506078 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.784 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[016537ad-72e5-42ca-b990-3b15617c559f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.790 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6f3399-ecb3-46be-8910-2e284ee5da41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:26 compute-0 NetworkManager[56326]: <info>  [1763801606.8181] device (tap8ee541ea-f0): carrier: link connected
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.823 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[bef849ef-93ca-4037-9d26-ac49166e0034]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.842 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a460e4-c692-4b93-a6ed-e30ec167c79f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8ee541ea-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:36:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672040, 'reachable_time': 33156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254619, 'error': None, 'target': 'ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.858 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[3771d9ab-1839-40b0-80c1-9095746b4657]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:3630'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672040, 'tstamp': 672040}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254620, 'error': None, 'target': 'ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.874 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[d85980c7-ad63-456f-a3de-71015d529a96]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8ee541ea-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:36:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672040, 'reachable_time': 33156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254621, 'error': None, 'target': 'ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
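
Annotation: the two oversized privsep replies above are pyroute2-style RTM_NEWLINK messages serialized as dicts; Neutron's ip_lib reads fields out of their 'attrs' list-of-pairs layout. A small, purely illustrative helper (not Neutron code) that extracts the values visible in the dumps:

    # Pull named attributes out of a pyroute2-style netlink message,
    # e.g. the RTM_NEWLINK dicts in the privsep replies above.
    def get_attr(msg, name):
        for key, value in msg.get('attrs', ()):
            if key == name:
                return value
        return None

    def summarize_link(msg):
        return {
            'ifname': get_attr(msg, 'IFLA_IFNAME'),  # tap8ee541ea-f1
            'mac': get_attr(msg, 'IFLA_ADDRESS'),    # fa:16:3e:8e:36:30
            'mtu': get_attr(msg, 'IFLA_MTU'),        # 1500
            'state': msg.get('state'),               # 'up'
        }
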
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.905 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[82ba8964-b5b1-4af0-a1da-2f422d8c6a1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.955 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[ddedb1c3-f8bd-41c2-a76b-bc157e9aecb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.967 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ee541ea-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.968 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.968 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ee541ea-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:53:26 compute-0 nova_compute[189268]: 2025-11-22 08:53:26.970 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:26 compute-0 kernel: tap8ee541ea-f0: entered promiscuous mode
Nov 22 08:53:26 compute-0 NetworkManager[56326]: <info>  [1763801606.9710] manager: (tap8ee541ea-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Nov 22 08:53:26 compute-0 nova_compute[189268]: 2025-11-22 08:53:26.975 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.978 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8ee541ea-f0, col_values=(('external_ids', {'iface-id': 'cddd47d2-111c-4ed1-83df-9f3b0e628d26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
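
Annotation: the three ovsdbapp transactions above (DelPortCommand, then AddPortCommand, then DbSetCommand) move the metadata veth leg onto br-int and stamp it with an iface-id so ovn-controller can associate the interface with its logical port. Roughly the same sequence through ovsdbapp's public API; this is a sketch, and the db.sock path and timeout are assumptions:

    # Sketch: replay the logged OVS transactions via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server(
            'unix:/run/openvswitch/db.sock', 'Open_vSwitch'),
        timeout=10)
    ovs = impl_idl.OvsdbIdl(conn)

    # DelPortCommand(port=tap8ee541ea-f0, bridge=br-ex, if_exists=True)
    ovs.del_port('tap8ee541ea-f0', bridge='br-ex', if_exists=True).execute(
        check_error=True)
    # AddPortCommand(bridge=br-int, port=tap8ee541ea-f0, may_exist=True)
    ovs.add_port('br-int', 'tap8ee541ea-f0', may_exist=True).execute(
        check_error=True)
    # DbSetCommand: tag the interface with the OVN logical port id.
    ovs.db_set(
        'Interface', 'tap8ee541ea-f0',
        ('external_ids',
         {'iface-id': 'cddd47d2-111c-4ed1-83df-9f3b0e628d26'})
    ).execute(check_error=True)
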
Nov 22 08:53:26 compute-0 nova_compute[189268]: 2025-11-22 08:53:26.979 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:26 compute-0 ovn_controller[97783]: 2025-11-22T08:53:26Z|00169|binding|INFO|Releasing lport cddd47d2-111c-4ed1-83df-9f3b0e628d26 from this chassis (sb_readonly=0)
Nov 22 08:53:26 compute-0 nova_compute[189268]: 2025-11-22 08:53:26.980 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.982 106642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8ee541ea-f059-4138-b6cf-87ec84c3e9f8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8ee541ea-f059-4138-b6cf-87ec84c3e9f8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.983 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[03e9f7ac-9add-41f4-bd4b-4a2e1b04e156]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.984 106642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: global
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     log         /dev/log local0 debug
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     log-tag     haproxy-metadata-proxy-8ee541ea-f059-4138-b6cf-87ec84c3e9f8
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     user        root
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     group       root
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     maxconn     1024
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     pidfile     /var/lib/neutron/external/pids/8ee541ea-f059-4138-b6cf-87ec84c3e9f8.pid.haproxy
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     daemon
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: defaults
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     log global
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     mode http
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     option httplog
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     option dontlognull
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     option http-server-close
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     option forwardfor
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     retries                 3
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     timeout http-request    30s
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     timeout connect         30s
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     timeout client          32s
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     timeout server          32s
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     timeout http-keep-alive 30s
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: listen listener
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     bind 169.254.169.254:80
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:     http-request add-header X-OVN-Network-ID 8ee541ea-f059-4138-b6cf-87ec84c3e9f8
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 08:53:26 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:26.984 106642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8', 'env', 'PROCESS_TAG=haproxy-8ee541ea-f059-4138-b6cf-87ec84c3e9f8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8ee541ea-f059-4138-b6cf-87ec84c3e9f8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
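
Annotation: the haproxy_cfg dump and the rootwrap command above are the agent writing a per-network proxy config and starting haproxy inside the ovnmeta- namespace (create_config_file in neutron/agent/ovn/metadata/driver.py, per the log). A compressed illustration of those two steps; the template keeps only the distinctive directives from the dump, and a direct ip-netns call stands in for the sudo/rootwrap/env wrapper:

    # Sketch: render a cut-down version of the config dumped above and
    # launch haproxy in the per-network namespace, as the logged
    # rootwrap command does. Requires root.
    import string
    import subprocess
    import textwrap

    HAPROXY_CFG = string.Template(textwrap.dedent("""\
        global
            user root
            group root
            maxconn 1024
            pidfile $pidfile
            daemon

        listen listener
            bind 169.254.169.254:80
            server metadata $socket
            http-request add-header X-OVN-Network-ID $network_id
        """))

    def spawn_metadata_proxy(network_id, cfg_path, pidfile, socket_path):
        with open(cfg_path, 'w') as f:
            f.write(HAPROXY_CFG.substitute(pidfile=pidfile,
                                           socket=socket_path,
                                           network_id=network_id))
        subprocess.run(
            ['ip', 'netns', 'exec', 'ovnmeta-%s' % network_id,
             'haproxy', '-f', cfg_path],
            check=True)
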
Nov 22 08:53:26 compute-0 nova_compute[189268]: 2025-11-22 08:53:26.993 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.023 189273 DEBUG nova.compute.manager [req-6056eaa5-316f-429c-b4d0-d8d7a706c16a req-3ab0def1-e1d6-468b-ada5-b82c31f88a3c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Received event network-vif-plugged-ed7b62da-e420-4250-acdc-71cedcdde8ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.023 189273 DEBUG oslo_concurrency.lockutils [req-6056eaa5-316f-429c-b4d0-d8d7a706c16a req-3ab0def1-e1d6-468b-ada5-b82c31f88a3c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.024 189273 DEBUG oslo_concurrency.lockutils [req-6056eaa5-316f-429c-b4d0-d8d7a706c16a req-3ab0def1-e1d6-468b-ada5-b82c31f88a3c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.024 189273 DEBUG oslo_concurrency.lockutils [req-6056eaa5-316f-429c-b4d0-d8d7a706c16a req-3ab0def1-e1d6-468b-ada5-b82c31f88a3c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.024 189273 DEBUG nova.compute.manager [req-6056eaa5-316f-429c-b4d0-d8d7a706c16a req-3ab0def1-e1d6-468b-ada5-b82c31f88a3c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Processing event network-vif-plugged-ed7b62da-e420-4250-acdc-71cedcdde8ed _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
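
Annotation: the Acquiring / acquired / released triplet around _pop_event above is oslo.concurrency's lockutils serializing event handling; Nova's lock name is the instance UUID plus "-events". A generic equivalent of that pattern (the lock name and function below are hypothetical):

    # Sketch: the decorator that produces the Acquiring / acquired /
    # released DEBUG lines above. Lock name is illustrative only.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('instance-uuid-events')
    def pop_instance_event(instance, event):
        # Handle the network-vif-plugged event while no other thread
        # can mutate this instance's pending-event table.
        ...
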
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.120 189273 DEBUG nova.network.neutron [req-b668e6f1-4865-4e85-b0a1-9a0c09f05f39 req-c57553c1-dfa5-4805-8ee4-0a2bdb3c01d7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updated VIF entry in instance network info cache for port ed7b62da-e420-4250-acdc-71cedcdde8ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.120 189273 DEBUG nova.network.neutron [req-b668e6f1-4865-4e85-b0a1-9a0c09f05f39 req-c57553c1-dfa5-4805-8ee4-0a2bdb3c01d7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updating instance_info_cache with network_info: [{"id": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "address": "fa:16:3e:84:a4:4f", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped7b62da-e4", "ovs_interfaceid": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.136 189273 DEBUG oslo_concurrency.lockutils [req-b668e6f1-4865-4e85-b0a1-9a0c09f05f39 req-c57553c1-dfa5-4805-8ee4-0a2bdb3c01d7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.203 189273 DEBUG nova.compute.manager [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.204 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801607.2021532, 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.204 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] VM Started (Lifecycle Event)
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.214 189273 DEBUG nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.226 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.228 189273 INFO nova.virt.libvirt.driver [-] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Instance spawned successfully.
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.229 189273 DEBUG nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.234 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.254 189273 DEBUG nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.254 189273 DEBUG nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.254 189273 DEBUG nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.255 189273 DEBUG nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.255 189273 DEBUG nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.256 189273 DEBUG nova.virt.libvirt.driver [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.259 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.259 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801607.2025945, 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.259 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] VM Paused (Lifecycle Event)
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.291 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.297 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801607.2111256, 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.299 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] VM Resumed (Lifecycle Event)
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.321 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.325 189273 INFO nova.compute.manager [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Took 5.59 seconds to spawn the instance on the hypervisor.
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.325 189273 DEBUG nova.compute.manager [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.328 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.352 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.396 189273 INFO nova.compute.manager [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Took 6.06 seconds to build instance.
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.414 189273 DEBUG oslo_concurrency.lockutils [None req-abccdc5c-bd84-40c2-a2e0-270e8584e63a 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:27 compute-0 podman[254659]: 2025-11-22 08:53:27.462669592 +0000 UTC m=+0.080240903 container create 31363378f66a25ca199f40a6b5b370dfe3465a924f0c03ba7c321c77280dfe40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 08:53:27 compute-0 nova_compute[189268]: 2025-11-22 08:53:27.473 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:27 compute-0 systemd[1]: Started libpod-conmon-31363378f66a25ca199f40a6b5b370dfe3465a924f0c03ba7c321c77280dfe40.scope.
Nov 22 08:53:27 compute-0 podman[254659]: 2025-11-22 08:53:27.421695154 +0000 UTC m=+0.039266505 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 08:53:27 compute-0 systemd[1]: Started libcrun container.
Nov 22 08:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0eaebbaa7eb018c7bb175994600596fd1157614da7078e2bc7bc5075d36020f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 08:53:27 compute-0 podman[254659]: 2025-11-22 08:53:27.569781898 +0000 UTC m=+0.187353239 container init 31363378f66a25ca199f40a6b5b370dfe3465a924f0c03ba7c321c77280dfe40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:53:27 compute-0 podman[254659]: 2025-11-22 08:53:27.57772372 +0000 UTC m=+0.195295041 container start 31363378f66a25ca199f40a6b5b370dfe3465a924f0c03ba7c321c77280dfe40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:53:27 compute-0 neutron-haproxy-ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8[254674]: [NOTICE]   (254678) : New worker (254680) forked
Nov 22 08:53:27 compute-0 neutron-haproxy-ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8[254674]: [NOTICE]   (254678) : Loading success.
Nov 22 08:53:29 compute-0 nova_compute[189268]: 2025-11-22 08:53:29.139 189273 DEBUG nova.compute.manager [req-e18cbe96-0f0e-4f48-beb0-90b7360bac6c req-e22081d6-6cee-4cec-b917-cf06e05646ea 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Received event network-vif-plugged-ed7b62da-e420-4250-acdc-71cedcdde8ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:53:29 compute-0 nova_compute[189268]: 2025-11-22 08:53:29.140 189273 DEBUG oslo_concurrency.lockutils [req-e18cbe96-0f0e-4f48-beb0-90b7360bac6c req-e22081d6-6cee-4cec-b917-cf06e05646ea 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:53:29 compute-0 nova_compute[189268]: 2025-11-22 08:53:29.141 189273 DEBUG oslo_concurrency.lockutils [req-e18cbe96-0f0e-4f48-beb0-90b7360bac6c req-e22081d6-6cee-4cec-b917-cf06e05646ea 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:53:29 compute-0 nova_compute[189268]: 2025-11-22 08:53:29.141 189273 DEBUG oslo_concurrency.lockutils [req-e18cbe96-0f0e-4f48-beb0-90b7360bac6c req-e22081d6-6cee-4cec-b917-cf06e05646ea 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:53:29 compute-0 nova_compute[189268]: 2025-11-22 08:53:29.141 189273 DEBUG nova.compute.manager [req-e18cbe96-0f0e-4f48-beb0-90b7360bac6c req-e22081d6-6cee-4cec-b917-cf06e05646ea 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] No waiting events found dispatching network-vif-plugged-ed7b62da-e420-4250-acdc-71cedcdde8ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:53:29 compute-0 nova_compute[189268]: 2025-11-22 08:53:29.142 189273 WARNING nova.compute.manager [req-e18cbe96-0f0e-4f48-beb0-90b7360bac6c req-e22081d6-6cee-4cec-b917-cf06e05646ea 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Received unexpected event network-vif-plugged-ed7b62da-e420-4250-acdc-71cedcdde8ed for instance with vm_state active and task_state None.
Nov 22 08:53:29 compute-0 podman[203476]: time="2025-11-22T08:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:53:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:53:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Nov 22 08:53:30 compute-0 nova_compute[189268]: 2025-11-22 08:53:30.958 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:31 compute-0 openstack_network_exporter[205661]: ERROR   08:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:53:31 compute-0 openstack_network_exporter[205661]: ERROR   08:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:53:31 compute-0 openstack_network_exporter[205661]: ERROR   08:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:53:31 compute-0 openstack_network_exporter[205661]: ERROR   08:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:53:31 compute-0 openstack_network_exporter[205661]: 
Nov 22 08:53:31 compute-0 openstack_network_exporter[205661]: ERROR   08:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:53:31 compute-0 openstack_network_exporter[205661]: 
Nov 22 08:53:32 compute-0 podman[254690]: 2025-11-22 08:53:32.10974647 +0000 UTC m=+0.068859981 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, architecture=x86_64, managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=)
Nov 22 08:53:32 compute-0 nova_compute[189268]: 2025-11-22 08:53:32.471 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:34 compute-0 podman[254711]: 2025-11-22 08:53:34.115127984 +0000 UTC m=+0.066832218 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 22 08:53:35 compute-0 nova_compute[189268]: 2025-11-22 08:53:35.962 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:37 compute-0 nova_compute[189268]: 2025-11-22 08:53:37.035 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:37 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:37.036 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:cf:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'd6:f7:8f:a1:cd:35'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:53:37 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:37.038 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 08:53:37 compute-0 nova_compute[189268]: 2025-11-22 08:53:37.473 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:40 compute-0 nova_compute[189268]: 2025-11-22 08:53:40.966 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:42 compute-0 nova_compute[189268]: 2025-11-22 08:53:42.171 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:53:42 compute-0 nova_compute[189268]: 2025-11-22 08:53:42.172 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:53:42 compute-0 nova_compute[189268]: 2025-11-22 08:53:42.188 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 08:53:42 compute-0 nova_compute[189268]: 2025-11-22 08:53:42.189 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
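[Note] The "Running periodic task ComputeManager._*" lines here and below come from oslo.service's periodic-task framework, which nova's ComputeManager uses for housekeeping. A minimal sketch of the mechanism, with an illustrative manager class and spacing (not nova's actual values):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        # Each decorated method is collected and logged as "Running periodic task ...".
        @periodic_task.periodic_task(spacing=60)
        def _heal_cache(self, context):
            pass  # housekeeping body

    mgr = Manager()
    mgr.run_periodic_tasks(context=None)  # a timer loop normally calls this repeatedly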
Nov 22 08:53:42 compute-0 nova_compute[189268]: 2025-11-22 08:53:42.475 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:43 compute-0 nova_compute[189268]: 2025-11-22 08:53:43.112 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:53:44 compute-0 podman[254735]: 2025-11-22 08:53:44.759912341 +0000 UTC m=+0.078990620 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 22 08:53:44 compute-0 podman[254736]: 2025-11-22 08:53:44.760219089 +0000 UTC m=+0.070285868 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:53:44 compute-0 podman[254734]: 2025-11-22 08:53:44.769750733 +0000 UTC m=+0.092721435 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Nov 22 08:53:45 compute-0 nova_compute[189268]: 2025-11-22 08:53:45.970 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:46 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:53:46.040 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
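[Note] This transaction is the agent acknowledging nb_cfg=17 by writing it into Chassis_Private:external_ids (the delayed update announced at 08:53:37). A hedged sketch of the equivalent update through ovsdbapp's idl API (the SB endpoint is a placeholder; the record UUID is the one from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    # Placeholder endpoint; this deployment uses a TLS-protected SB address.
    idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6642", "OVN_Southbound")
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl=idl, timeout=10))

    # Same DbSetCommand the agent logs, via the generic db_set helper.
    api.db_set(
        "Chassis_Private", "e5f17f07-bc92-4131-bf96-5df2839ca4b0",
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": "17"}),
    ).execute(check_error=True)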
Nov 22 08:53:47 compute-0 nova_compute[189268]: 2025-11-22 08:53:47.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:53:47 compute-0 nova_compute[189268]: 2025-11-22 08:53:47.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
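[Note] Context for the skip: nova only soft-deletes (and later reclaims) instances when reclaim_instance_interval is positive; with the default of 0 the periodic task exits immediately, as logged. An illustrative nova.conf excerpt enabling a 10-minute reclaim window:

    [DEFAULT]
    reclaim_instance_interval = 600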
Nov 22 08:53:47 compute-0 nova_compute[189268]: 2025-11-22 08:53:47.479 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:48 compute-0 nova_compute[189268]: 2025-11-22 08:53:48.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:53:50 compute-0 nova_compute[189268]: 2025-11-22 08:53:50.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:53:50 compute-0 nova_compute[189268]: 2025-11-22 08:53:50.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:53:50 compute-0 nova_compute[189268]: 2025-11-22 08:53:50.976 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:52 compute-0 nova_compute[189268]: 2025-11-22 08:53:52.481 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:55 compute-0 nova_compute[189268]: 2025-11-22 08:53:55.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:53:55 compute-0 podman[254796]: 2025-11-22 08:53:55.120372344 +0000 UTC m=+0.072590540 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a)
Nov 22 08:53:55 compute-0 podman[254797]: 2025-11-22 08:53:55.131688944 +0000 UTC m=+0.077710805 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi)
Nov 22 08:53:55 compute-0 nova_compute[189268]: 2025-11-22 08:53:55.981 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:57 compute-0 podman[254835]: 2025-11-22 08:53:57.164524288 +0000 UTC m=+0.118280844 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, architecture=x86_64, io.buildah.version=1.29.0, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, config_id=edpm, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vendor=Red Hat, Inc.)
Nov 22 08:53:57 compute-0 podman[254836]: 2025-11-22 08:53:57.177626227 +0000 UTC m=+0.126830272 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:53:57 compute-0 nova_compute[189268]: 2025-11-22 08:53:57.483 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:53:59 compute-0 nova_compute[189268]: 2025-11-22 08:53:59.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:53:59 compute-0 podman[203476]: time="2025-11-22T08:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:53:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:53:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
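[Note] These two GETs are podman_exporter scraping the libpod REST API over the socket it mounts (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data above). The same endpoint can be queried directly; a small sketch using curl's standard --unix-socket flag:

    import json
    import subprocess

    # List all containers through the libpod API, as the exporter's collector does.
    out = subprocess.run(
        ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
         "http://d/v4.9.3/libpod/containers/json?all=true"],
        capture_output=True, text=True, check=True,
    )
    print(len(json.loads(out.stdout)), "containers")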
Nov 22 08:54:00 compute-0 nova_compute[189268]: 2025-11-22 08:54:00.984 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:01 compute-0 openstack_network_exporter[205661]: ERROR   08:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:54:01 compute-0 openstack_network_exporter[205661]: ERROR   08:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:54:01 compute-0 openstack_network_exporter[205661]: ERROR   08:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:54:01 compute-0 openstack_network_exporter[205661]: ERROR   08:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:54:01 compute-0 openstack_network_exporter[205661]: ERROR   08:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
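[Note] These recurring exporter ERRORs are expected on a compute node of this shape: ovn-northd runs on the control plane, so no northd control socket exists here, and the dpif-netdev/pmd-* appctl calls only succeed on a userspace (DPDK/netdev) datapath, not the kernel datapath in use; the ovsdb-server complaint likewise suggests no *.ctl file is visible at the mounted path. A quick check of both conditions (paths match the volumes mounted into the exporter container below):

    import glob
    import subprocess

    # Control sockets that appctl-style tools probe for (*.ctl under the run dirs).
    print(glob.glob("/run/openvswitch/*.ctl"), glob.glob("/run/ovn/*.ctl"))

    # PMD statistics exist only for the userspace datapath; on a kernel datapath
    # this returns the same "please specify an existing datapath" error as above.
    subprocess.run(["ovs-appctl", "dpif-netdev/pmd-perf-show"], check=False)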
Nov 22 08:54:02 compute-0 nova_compute[189268]: 2025-11-22 08:54:02.485 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:02 compute-0 ovn_controller[97783]: 2025-11-22T08:54:02Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:a4:4f 10.100.3.45
Nov 22 08:54:02 compute-0 ovn_controller[97783]: 2025-11-22T08:54:02Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:a4:4f 10.100.3.45
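[Note] The DHCPOFFER/DHCPACK pair shows ovn-controller's pinctrl thread answering the instance's DHCP request locally; OVN serves DHCP from DHCP_Options rows that Neutron attaches to each logical switch port, with no dnsmasq involved. A hedged sketch of the equivalent manual wiring (port name and option values are illustrative; Neutron normally manages all of this):

    import subprocess

    def nbctl(*args):
        return subprocess.run(["ovn-nbctl", *args], capture_output=True,
                              text=True, check=True).stdout.strip()

    # Create a DHCPv4 options row for the subnet and attach it to a port.
    uuid = nbctl("create", "DHCP_Options", "cidr=10.100.3.0/24",
                 'options:server_id="10.100.3.1"',
                 'options:server_mac="fa:16:3e:84:a4:4f"',
                 'options:lease_time="43200"',
                 'options:router="10.100.3.1"')
    nbctl("lsp-set-dhcpv4-options", "some-port-name", uuid)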
Nov 22 08:54:03 compute-0 podman[254891]: 2025-11-22 08:54:03.148592407 +0000 UTC m=+0.352949001 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public)
Nov 22 08:54:05 compute-0 nova_compute[189268]: 2025-11-22 08:54:05.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:54:05 compute-0 podman[254911]: 2025-11-22 08:54:05.10691841 +0000 UTC m=+0.066286612 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 22 08:54:05 compute-0 nova_compute[189268]: 2025-11-22 08:54:05.126 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:54:05 compute-0 nova_compute[189268]: 2025-11-22 08:54:05.126 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:54:05 compute-0 nova_compute[189268]: 2025-11-22 08:54:05.127 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
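[Note] The acquire/wait/release triple around "compute_resources" is oslo.concurrency's lockutils instrumentation; the resource tracker takes this lock around every read-modify-write of its tracked state. A minimal sketch of the same primitive (function names illustrative):

    from oslo_concurrency import lockutils

    # Context-manager form, for a short critical section:
    with lockutils.lock("compute_resources"):
        pass  # mutate tracked resources here

    # Decorator form; the "inner" wrapper named in the log lines is this decorator's.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass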
Nov 22 08:54:05 compute-0 nova_compute[189268]: 2025-11-22 08:54:05.127 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:54:05 compute-0 nova_compute[189268]: 2025-11-22 08:54:05.205 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:54:05 compute-0 nova_compute[189268]: 2025-11-22 08:54:05.284 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:54:05 compute-0 nova_compute[189268]: 2025-11-22 08:54:05.285 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:54:05 compute-0 nova_compute[189268]: 2025-11-22 08:54:05.367 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
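[Note] Both qemu-img probes run under oslo_concurrency.prlimit so a corrupt or malicious image cannot pin the CPU or balloon memory; the --as/--cpu flags on the logged command line correspond to ProcessLimits fields. The equivalent direct call from Python (disk path reused from the log):

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
        "/var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk",
        "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(address_space=1073741824, cpu_time=30),
    )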
Nov 22 08:54:05 compute-0 nova_compute[189268]: 2025-11-22 08:54:05.746 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:54:05 compute-0 nova_compute[189268]: 2025-11-22 08:54:05.749 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5172MB free_disk=72.39787292480469GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:54:05 compute-0 nova_compute[189268]: 2025-11-22 08:54:05.750 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:54:05 compute-0 nova_compute[189268]: 2025-11-22 08:54:05.750 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:54:05 compute-0 nova_compute[189268]: 2025-11-22 08:54:05.839 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:54:05 compute-0 nova_compute[189268]: 2025-11-22 08:54:05.840 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:54:05 compute-0 nova_compute[189268]: 2025-11-22 08:54:05.841 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:54:05 compute-0 nova_compute[189268]: 2025-11-22 08:54:05.886 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:54:05 compute-0 nova_compute[189268]: 2025-11-22 08:54:05.898 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
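[Note] Placement turns this inventory into schedulable capacity as (total - reserved) * allocation_ratio per resource class, which for the logged values gives 32 VCPU, 7167 MB of RAM and about 70 GB of disk:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 70.2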
Nov 22 08:54:05 compute-0 nova_compute[189268]: 2025-11-22 08:54:05.987 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:06 compute-0 nova_compute[189268]: 2025-11-22 08:54:06.032 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:54:06 compute-0 nova_compute[189268]: 2025-11-22 08:54:06.032 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:54:07 compute-0 nova_compute[189268]: 2025-11-22 08:54:07.489 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:08 compute-0 ovn_controller[97783]: 2025-11-22T08:54:08Z|00170|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Nov 22 08:54:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:54:09.996 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:54:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:54:09.997 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:54:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:54:09.997 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:54:10 compute-0 nova_compute[189268]: 2025-11-22 08:54:10.990 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:12 compute-0 nova_compute[189268]: 2025-11-22 08:54:12.495 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:15 compute-0 podman[254943]: 2025-11-22 08:54:15.115359778 +0000 UTC m=+0.065573074 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:54:15 compute-0 podman[254944]: 2025-11-22 08:54:15.119344334 +0000 UTC m=+0.064056724 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 08:54:15 compute-0 podman[254942]: 2025-11-22 08:54:15.122524178 +0000 UTC m=+0.075767205 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 08:54:15 compute-0 nova_compute[189268]: 2025-11-22 08:54:15.994 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:17 compute-0 nova_compute[189268]: 2025-11-22 08:54:17.497 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:20 compute-0 nova_compute[189268]: 2025-11-22 08:54:20.997 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.097 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Polling can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.098 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
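[Note] The one-thread executor noted here is configurable; if memory serves, it is the threads_to_process_pollsters option in ceilometer's [polling] section (an assumption worth verifying against the deployed release). An illustrative ceilometer.conf excerpt raising it so the pollsters registered below can run concurrently:

    [polling]
    # Assumed option name; default is 1 worker thread per polling task.
    threads_to_process_pollsters = 4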
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.099 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.105 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.105 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.105 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.106 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.106 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.106 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.107 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.107 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.107 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.107 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.107 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
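[editor's note] The nine near-identical lines above show the polling manager wiring each pollster, wrapped in a stevedore Extension, to one shared ThreadPoolExecutor. A minimal sketch of that plugin-loading pattern, assuming the entry-point namespace (the namespace itself is not shown in this log):

    # Sketch only: stevedore plugin loading as implied by the
    # registration lines above; the namespace is an assumption.
    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    mgr = extension.ExtensionManager(namespace='ceilometer.poll.compute',
                                     invoke_on_load=False)
    executor = ThreadPoolExecutor()  # single pool shared by all pollsters
    for ext in mgr:
        # each Extension wraps one pollster entry point
        print('registering pollster', ext.name)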
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.107 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.108 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}41de7311aa3eb0f3adb679afd5ea377bdc27c99a5c84bf2ba532fbbe80a7016c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 22 08:54:22 compute-0 nova_compute[189268]: 2025-11-22 08:54:22.500 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.593 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1831 Content-Type: application/json Date: Sat, 22 Nov 2025 08:54:22 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-b4ac0a85-01b2-4f1c-996d-d1c16f783977 x-openstack-request-id: req-b4ac0a85-01b2-4f1c-996d-d1c16f783977 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.594 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5", "name": "te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn", "status": "ACTIVE", "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "user_id": "37215e9bc58040aeb55ccd7e534b2a8c", "metadata": {"metering.server_group": "e65dbf71-31dd-495a-8544-26d84c5284b3"}, "hostId": "44bfd8cb608e8e36740e229fabc76c7785419d24d05fef040bbf4521", "image": {"id": "0f738201-0a54-4f17-a455-df9aa7963f79", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/0f738201-0a54-4f17-a455-df9aa7963f79"}]}, "flavor": {"id": "60cc47c3-347f-4964-bb52-9bef8d0548a9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/60cc47c3-347f-4964-bb52-9bef8d0548a9"}]}, "created": "2025-11-22T08:53:20Z", "updated": "2025-11-22T08:53:27Z", "addresses": {"": [{"version": 4, "addr": "10.100.3.45", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:84:a4:4f"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-22T08:53:27.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.594 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 used request id req-b4ac0a85-01b2-4f1c-996d-d1c16f783977 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.595 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5', 'name': 'te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn', 'flavor': {'id': '60cc47c3-347f-4964-bb52-9bef8d0548a9', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '0f738201-0a54-4f17-a455-df9aa7963f79'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6872b219a7f441adb7db6dc2b4e66fd7', 'user_id': '37215e9bc58040aeb55ccd7e534b2a8c', 'hostId': '44bfd8cb608e8e36740e229fabc76c7785419d24d05fef040bbf4521', 'status': 'active', 'metadata': {'metering.server_group': 'e65dbf71-31dd-495a-8544-26d84c5284b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
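[editor's note] The REQ/RESP exchange above is the discovery step: one authenticated GET to the Nova API, whose body is then reduced to the instance-data dict. A sketch of the equivalent novaclient call; the auth URL and credentials are placeholders, and only the server UUID comes from this log:

    # Sketch only: placeholder Keystone credentials and auth URL.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    auth = v3.Password(auth_url='https://keystone.example.com/v3',
                       username='ceilometer', password='secret',
                       project_name='service',
                       user_domain_id='default',
                       project_domain_id='default')
    nova = nova_client.Client('2.1', session=session.Session(auth=auth))
    server = nova.servers.get('4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5')
    print(server.name, server.metadata)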
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.595 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.595 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.596 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.596 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-22T08:54:22.596108) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.600 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 / taped7b62da-e4 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.601 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.602 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
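[editor's note] The cycle just completed for network.incoming.bytes is the template every meter below repeats: discovery, coordination check, heartbeat, libvirt inspection, sample. The "No delta meter predecessor" line means there is no earlier cumulative reading for vNIC taped7b62da-e4, so no delta can be formed on the first poll after launch. A self-contained sketch of that predecessor logic (names are illustrative, not ceilometer internals):

    previous = {}  # (instance_id, vnic) -> last cumulative counter

    def delta_or_none(instance_id, vnic, cumulative):
        key = (instance_id, vnic)
        prev = previous.get(key)
        previous[key] = cumulative
        if prev is None:
            return None  # first poll: no delta meter predecessor
        return cumulative - prev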
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.602 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.602 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.602 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.603 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-22T08:54:22.603234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.603 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.603 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.604 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.604 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.604 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.604 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.604 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.604 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.605 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.605 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.605 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.606 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.606 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-22T08:54:22.604915) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.606 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.606 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.606 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.606 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.607 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-22T08:54:22.606680) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.607 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.607 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.607 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.607 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.607 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.608 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.608 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-22T08:54:22.608062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.640 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/cpu volume: 51850000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.640 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
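[editor's note] The cpu volume above, 51850000000, is cumulative guest CPU time in nanoseconds (about 51.85 s since the 08:53:27 launch). A sketch of reading the same counter with libvirt-python; the qemu:///system URI is an assumption, and the domain name is OS-EXT-SRV-ATTR:instance_name from the RESP body:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')  # assumed local URI
    dom = conn.lookupByName('instance-0000000f')
    state, max_mem_kib, mem_kib, ncpu, cpu_time_ns = dom.info()
    print('%d ns = %.2f s of CPU time' % (cpu_time_ns, cpu_time_ns / 1e9))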
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.641 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.641 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.641 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.641 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.641 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.642 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.642 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-22T08:54:22.641773) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.642 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.642 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.642 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.643 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.643 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.643 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.643 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-22T08:54:22.643339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.661 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.662 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.662 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
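[editor's note] disk.device.capacity emits one sample per block device: 1073741824 bytes is the 1 GiB root disk of the m1.nano flavor (disk: 1), while the 509952-byte device is plausibly the config drive ("config_drive": "True" in the RESP body); the latter is an inference, not stated in the log. The root-disk arithmetic:

    assert 1 * 1024**3 == 1073741824  # flavor disk: 1 GiB, in bytes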
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.663 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.663 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.663 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.663 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.663 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.664 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-22T08:54:22.663633) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.706 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.bytes volume: 30149632 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.707 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.708 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.708 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.709 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.709 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.709 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.709 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.710 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.710 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.711 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.712 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.712 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.712 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.713 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.713 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-22T08:54:22.709837) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.714 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-22T08:54:22.713795) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.713 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.714 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.715 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.715 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.715 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.715 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.716 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.716 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.716 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.717 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-22T08:54:22.716544) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.717 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn>]
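[editor's note] Unlike the transient conditions around it, this ERROR is deliberate and permanent: the libvirt inspector does not provide precomputed rate meters (see the "does not provide data" line above), so the pollster raises PollsterPermanentError and the manager blacklists the instance for network.incoming.bytes.rate rather than retrying each cycle. A self-contained sketch of that blacklisting pattern, with illustrative names rather than ceilometer's actual internals:

    class PollsterPermanentError(Exception):
        # raised when a pollster can never succeed for these resources
        def __init__(self, resources):
            super().__init__(resources)
            self.resources = resources

    blacklist = set()

    def poll(name, get_samples, resources):
        todo = [r for r in resources if (name, r) not in blacklist]
        try:
            return list(get_samples(todo))
        except PollsterPermanentError as err:
            # never poll these resources with this pollster again
            blacklist.update((name, r) for r in err.resources)
            return []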
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.718 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.718 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.718 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.719 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.719 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.719 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.latency volume: 1495963975 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.720 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.latency volume: 112899247 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.721 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.721 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-22T08:54:22.719212) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.721 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.722 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.722 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.722 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.723 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.723 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-22T08:54:22.722965) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.723 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.724 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.724 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.724 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.725 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.725 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.726 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-22T08:54:22.725741) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.725 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.726 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.requests volume: 319 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.726 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.727 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.728 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.728 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.728 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.728 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.729 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.729 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.requests volume: 1093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.730 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.730 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-22T08:54:22.728972) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.731 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.731 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.731 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.732 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.732 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.732 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.733 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.733 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-22T08:54:22.732598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.733 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.734 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.734 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.734 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.734 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.735 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.735 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.735 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.bytes volume: 72822784 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.735 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.736 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-22T08:54:22.735187) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.736 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.736 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.736 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.737 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.737 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.737 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.737 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-22T08:54:22.737382) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.737 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.738 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
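[editor's note] The power.state volume of 1 matches "OS-EXT-STS:power_state": 1 in the RESP body; both use Nova's standard power-state codes:

    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    assert POWER_STATES[1] == 'RUNNING'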
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.738 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.738 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.738 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.738 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.739 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.739 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.latency volume: 64687542703 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.739 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-22T08:54:22.739075) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.739 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.740 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.740 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.740 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.740 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.741 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.741 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.741 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.742 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.742 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.742 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.742 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.742 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.742 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.743 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.743 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-22T08:54:22.741177) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.744 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.744 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-22T08:54:22.742784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.744 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.744 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-22T08:54:22.744517) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.745 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.745 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.745 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.745 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.746 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.746 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.746 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.746 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-22T08:54:22.746145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.747 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.747 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.747 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.747 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.747 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.748 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.748 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.748 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-22T08:54:22.747986) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.748 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.749 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.749 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.749 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.749 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.749 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.750 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.750 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn>]
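The ERROR above is ceilometer's blacklisting path: when an inspector can never provide data for a resource (here, LibvirtInspector for OutgoingBytesRatePollster), the pollster raises PollsterPermanentError and the polling manager stops polling that resource from that source. A minimal sketch of the pattern, assuming the plugin_base API referenced in the log paths; the class name and the _has_rate_data check are illustrative, not ceilometer code:

    # Hypothetical pollster: raises PollsterPermanentError for resources it can
    # never serve, which makes the polling manager blacklist them (the
    # "Preventing pollster ... anymore!" message above).
    from ceilometer.polling import plugin_base

    class ExampleRatePollster(plugin_base.PollsterBase):
        @property
        def default_discovery(self):
            return 'local_instances'

        def get_samples(self, manager, cache, resources):
            unsupported = [r for r in resources if not self._has_rate_data(r)]
            if unsupported:
                raise plugin_base.PollsterPermanentError(unsupported)
            return []

        def _has_rate_data(self, resource):
            return False  # placeholder check, an assumption of this sketch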
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.750 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.751 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.751 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-22T08:54:22.749767) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.751 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.751 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.751 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.751 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/memory.usage volume: 43.34765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.752 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
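The memory.usage volume of 43.34765625 above is the MiB figure the compute agent derives from libvirt for this instance. A rough stand-alone reproduction, assuming the instance UUID from the log and the libvirt-python bindings; ceilometer's exact formula may differ, this mirrors the common available-minus-unused reading:

    import libvirt  # requires the libvirt-python bindings

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5')
    stats = dom.memoryStats()  # values reported in KiB
    if 'available' in stats and 'unused' in stats:
        usage_mib = (stats['available'] - stats['unused']) / 1024.0
    else:
        usage_mib = stats.get('rss', 0) / 1024.0  # fallback, an assumption
    print('memory.usage ~= %.8f MiB' % usage_mib)
    conn.close()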
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.752 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.753 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.753 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-22T08:54:22.751711) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.753 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.758 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.758 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.758 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.759 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.759 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.759 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.759 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.759 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.759 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.759 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.759 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.760 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.760 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.760 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.760 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.760 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.760 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.760 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.760 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.760 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.760 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.760 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:54:22.761 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:54:26 compute-0 nova_compute[189268]: 2025-11-22 08:54:26.003 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:26 compute-0 podman[255001]: 2025-11-22 08:54:26.124327715 +0000 UTC m=+0.069730254 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:54:26 compute-0 podman[255000]: 2025-11-22 08:54:26.143021002 +0000 UTC m=+0.092803767 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Nov 22 08:54:27 compute-0 nova_compute[189268]: 2025-11-22 08:54:27.502 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:28 compute-0 podman[255037]: 2025-11-22 08:54:28.118000618 +0000 UTC m=+0.073373312 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, distribution-scope=public, release=1214.1726694543, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 22 08:54:28 compute-0 podman[255038]: 2025-11-22 08:54:28.174379476 +0000 UTC m=+0.127154991 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 08:54:29 compute-0 podman[203476]: time="2025-11-22T08:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:54:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:54:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
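The two GET lines above are libpod REST calls served on podman's unix socket (the podman_exporter container is the likely client, given its CONTAINER_HOST=unix:///run/podman/podman.sock environment later in this log). A stdlib-only sketch of the same request; the socket path is the conventional default and an assumption here:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that speaks HTTP over an AF_UNIX socket."""
        def __init__(self, socket_path):
            super().__init__('localhost')
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), 'bytes')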
Nov 22 08:54:31 compute-0 nova_compute[189268]: 2025-11-22 08:54:31.006 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:31 compute-0 openstack_network_exporter[205661]: ERROR   08:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:54:31 compute-0 openstack_network_exporter[205661]: ERROR   08:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:54:31 compute-0 openstack_network_exporter[205661]: ERROR   08:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:54:31 compute-0 openstack_network_exporter[205661]: ERROR   08:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:54:31 compute-0 openstack_network_exporter[205661]: ERROR   08:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
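These appctl.go errors mean the exporter could not find the <daemon>.<pid>.ctl control sockets that ovs-appctl-style calls require; on a compute node running only ovn-controller, the ovn-northd socket is legitimately absent. A quick local check under the usual run directories (the paths are defaults and an assumption of this sketch):

    import glob

    for pattern in ('/var/run/openvswitch/ovs-vswitchd.*.ctl',
                    '/var/run/openvswitch/ovsdb-server.*.ctl',
                    '/var/run/ovn/ovn-northd.*.ctl'):
        matches = glob.glob(pattern)
        print(pattern, '->', matches or 'no control socket found')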
Nov 22 08:54:32 compute-0 nova_compute[189268]: 2025-11-22 08:54:32.504 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:34 compute-0 podman[255081]: 2025-11-22 08:54:34.132680909 +0000 UTC m=+0.092316213 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., managed_by=edpm_ansible)
Nov 22 08:54:36 compute-0 nova_compute[189268]: 2025-11-22 08:54:36.010 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:36 compute-0 podman[255103]: 2025-11-22 08:54:36.121670618 +0000 UTC m=+0.078709113 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:54:37 compute-0 nova_compute[189268]: 2025-11-22 08:54:37.507 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:41 compute-0 nova_compute[189268]: 2025-11-22 08:54:41.017 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:42 compute-0 nova_compute[189268]: 2025-11-22 08:54:42.507 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:44 compute-0 nova_compute[189268]: 2025-11-22 08:54:44.033 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:54:44 compute-0 nova_compute[189268]: 2025-11-22 08:54:44.033 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:54:44 compute-0 nova_compute[189268]: 2025-11-22 08:54:44.034 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:54:44 compute-0 nova_compute[189268]: 2025-11-22 08:54:44.237 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:54:44 compute-0 nova_compute[189268]: 2025-11-22 08:54:44.238 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:54:44 compute-0 nova_compute[189268]: 2025-11-22 08:54:44.238 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:54:44 compute-0 nova_compute[189268]: 2025-11-22 08:54:44.239 189273 DEBUG nova.objects.instance [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:54:45 compute-0 nova_compute[189268]: 2025-11-22 08:54:45.918 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updating instance_info_cache with network_info: [{"id": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "address": "fa:16:3e:84:a4:4f", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped7b62da-e4", "ovs_interfaceid": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:54:46 compute-0 nova_compute[189268]: 2025-11-22 08:54:46.014 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:54:46 compute-0 nova_compute[189268]: 2025-11-22 08:54:46.014 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
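The _heal_instance_info_cache block above is one of nova-compute's periodic tasks, driven by oslo_service.periodic_task (the run_periodic_tasks frames in the log). A minimal illustrative manager; the class and method names here are hypothetical, not nova's:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class ExampleManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        # run_immediately=True so the demo fires on the first tick
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _heal_cache(self, context):
            print('refreshing cached network info...')

    mgr = ExampleManager()
    mgr.run_periodic_tasks(context=None)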
Nov 22 08:54:46 compute-0 nova_compute[189268]: 2025-11-22 08:54:46.015 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:54:46 compute-0 nova_compute[189268]: 2025-11-22 08:54:46.022 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:46 compute-0 nova_compute[189268]: 2025-11-22 08:54:46.082 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:54:46 compute-0 podman[255127]: 2025-11-22 08:54:46.109083827 +0000 UTC m=+0.062222585 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:54:46 compute-0 podman[255126]: 2025-11-22 08:54:46.124132697 +0000 UTC m=+0.080240933 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 22 08:54:46 compute-0 podman[255128]: 2025-11-22 08:54:46.129183831 +0000 UTC m=+0.073890705 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 22 08:54:47 compute-0 nova_compute[189268]: 2025-11-22 08:54:47.510 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:48 compute-0 nova_compute[189268]: 2025-11-22 08:54:48.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:54:49 compute-0 nova_compute[189268]: 2025-11-22 08:54:49.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:54:49 compute-0 nova_compute[189268]: 2025-11-22 08:54:49.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
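The "CONF.reclaim_instance_interval <= 0, skipping" line shows the guard nova applies before reclaiming soft-deleted instances: the interval is a plain oslo.config integer option and non-positive values disable the task. An illustrative registration only, not nova's actual option definition:

    from oslo_config import cfg

    opts = [cfg.IntOpt('reclaim_instance_interval', default=0,
                       help='Seconds between reclaims of soft-deleted '
                            'instances; <= 0 disables the periodic task.')]
    conf = cfg.ConfigOpts()
    conf.register_opts(opts)
    if conf.reclaim_instance_interval <= 0:
        print('CONF.reclaim_instance_interval <= 0, skipping...')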
Nov 22 08:54:50 compute-0 nova_compute[189268]: 2025-11-22 08:54:50.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:54:51 compute-0 nova_compute[189268]: 2025-11-22 08:54:51.024 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:51 compute-0 nova_compute[189268]: 2025-11-22 08:54:51.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:54:52 compute-0 nova_compute[189268]: 2025-11-22 08:54:52.511 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:56 compute-0 nova_compute[189268]: 2025-11-22 08:54:56.027 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:56 compute-0 nova_compute[189268]: 2025-11-22 08:54:56.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:54:57 compute-0 podman[255182]: 2025-11-22 08:54:57.12959357 +0000 UTC m=+0.080663914 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 22 08:54:57 compute-0 podman[255183]: 2025-11-22 08:54:57.139836802 +0000 UTC m=+0.082726419 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:54:57 compute-0 nova_compute[189268]: 2025-11-22 08:54:57.516 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:54:59 compute-0 podman[255219]: 2025-11-22 08:54:59.111166301 +0000 UTC m=+0.072342533 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_id=edpm, architecture=x86_64, com.redhat.component=ubi9-container, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, version=9.4, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 22 08:54:59 compute-0 podman[255220]: 2025-11-22 08:54:59.190009907 +0000 UTC m=+0.144267095 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 08:54:59 compute-0 podman[203476]: time="2025-11-22T08:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:54:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:54:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Nov 22 08:55:01 compute-0 nova_compute[189268]: 2025-11-22 08:55:01.033 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:01 compute-0 openstack_network_exporter[205661]: ERROR   08:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:55:01 compute-0 openstack_network_exporter[205661]: ERROR   08:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:55:01 compute-0 openstack_network_exporter[205661]: ERROR   08:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:55:01 compute-0 openstack_network_exporter[205661]: ERROR   08:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:55:01 compute-0 openstack_network_exporter[205661]: ERROR   08:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:55:02 compute-0 nova_compute[189268]: 2025-11-22 08:55:02.689 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:05 compute-0 nova_compute[189268]: 2025-11-22 08:55:05.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:55:05 compute-0 podman[255263]: 2025-11-22 08:55:05.108578775 +0000 UTC m=+0.065663846 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=edpm, name=ubi9-minimal, vcs-type=git, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, version=9.6, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Nov 22 08:55:05 compute-0 nova_compute[189268]: 2025-11-22 08:55:05.124 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:55:05 compute-0 nova_compute[189268]: 2025-11-22 08:55:05.124 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:55:05 compute-0 nova_compute[189268]: 2025-11-22 08:55:05.125 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:55:05 compute-0 nova_compute[189268]: 2025-11-22 08:55:05.125 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:55:05 compute-0 nova_compute[189268]: 2025-11-22 08:55:05.190 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:55:05 compute-0 nova_compute[189268]: 2025-11-22 08:55:05.268 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:55:05 compute-0 nova_compute[189268]: 2025-11-22 08:55:05.270 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:55:05 compute-0 nova_compute[189268]: 2025-11-22 08:55:05.331 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:55:05 compute-0 nova_compute[189268]: 2025-11-22 08:55:05.655 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:55:05 compute-0 nova_compute[189268]: 2025-11-22 08:55:05.657 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5199MB free_disk=72.39765930175781GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:55:05 compute-0 nova_compute[189268]: 2025-11-22 08:55:05.657 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:55:05 compute-0 nova_compute[189268]: 2025-11-22 08:55:05.658 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:55:05 compute-0 nova_compute[189268]: 2025-11-22 08:55:05.724 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:55:05 compute-0 nova_compute[189268]: 2025-11-22 08:55:05.725 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:55:05 compute-0 nova_compute[189268]: 2025-11-22 08:55:05.726 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:55:05 compute-0 nova_compute[189268]: 2025-11-22 08:55:05.808 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:55:05 compute-0 nova_compute[189268]: 2025-11-22 08:55:05.825 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:55:05 compute-0 nova_compute[189268]: 2025-11-22 08:55:05.827 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:55:05 compute-0 nova_compute[189268]: 2025-11-22 08:55:05.828 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:55:06 compute-0 nova_compute[189268]: 2025-11-22 08:55:06.036 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:07 compute-0 podman[255290]: 2025-11-22 08:55:07.137663039 +0000 UTC m=+0.091586955 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:55:07 compute-0 nova_compute[189268]: 2025-11-22 08:55:07.518 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:55:09.997 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:55:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:55:09.998 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:55:09 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:55:09.999 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:55:11 compute-0 nova_compute[189268]: 2025-11-22 08:55:11.040 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:12 compute-0 nova_compute[189268]: 2025-11-22 08:55:12.521 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.100 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.101 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.101 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.102 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.102 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.103 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.122 189273 DEBUG nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.135 189273 DEBUG nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.136 189273 DEBUG nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Image id 0f738201-0a54-4f17-a455-df9aa7963f79 yields fingerprint 1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.136 189273 INFO nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] image 0f738201-0a54-4f17-a455-df9aa7963f79 at (/var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f): checking
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.137 189273 DEBUG nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] image 0f738201-0a54-4f17-a455-df9aa7963f79 at (/var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.139 189273 DEBUG nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.140 189273 DEBUG nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.140 189273 DEBUG nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.141 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.201 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.202 189273 DEBUG nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 is backed by 1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.203 189273 WARNING nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Unknown base file: /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.204 189273 WARNING nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Unknown base file: /var/lib/nova/instances/_base/1d7f8e073419c499459afad86b152b7fec19c8da
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.204 189273 WARNING nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Unknown base file: /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.205 189273 INFO nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Active base files: /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.206 189273 INFO nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Removable base files: /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4 /var/lib/nova/instances/_base/1d7f8e073419c499459afad86b152b7fec19c8da /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.207 189273 INFO nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/3743d624bf4f49380cb6de0480bbb028361f5cb4
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.208 189273 INFO nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/1d7f8e073419c499459afad86b152b7fec19c8da
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.209 189273 INFO nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/e3659e0d5dc4ae82934981faa7447edd23aca3ad
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.209 189273 DEBUG nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.210 189273 DEBUG nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.211 189273 DEBUG nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Nov 22 08:55:14 compute-0 nova_compute[189268]: 2025-11-22 08:55:14.212 189273 INFO nova.virt.libvirt.imagecache [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Nov 22 08:55:16 compute-0 nova_compute[189268]: 2025-11-22 08:55:16.044 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:17 compute-0 podman[255325]: 2025-11-22 08:55:17.13231296 +0000 UTC m=+0.067204707 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 22 08:55:17 compute-0 podman[255318]: 2025-11-22 08:55:17.137315953 +0000 UTC m=+0.090904127 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 08:55:17 compute-0 podman[255319]: 2025-11-22 08:55:17.149163867 +0000 UTC m=+0.094620235 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 22 08:55:17 compute-0 nova_compute[189268]: 2025-11-22 08:55:17.525 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:21 compute-0 nova_compute[189268]: 2025-11-22 08:55:21.051 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:22 compute-0 nova_compute[189268]: 2025-11-22 08:55:22.528 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:26 compute-0 nova_compute[189268]: 2025-11-22 08:55:26.056 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:27 compute-0 nova_compute[189268]: 2025-11-22 08:55:27.530 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:28 compute-0 podman[255381]: 2025-11-22 08:55:28.150005179 +0000 UTC m=+0.098782656 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 08:55:28 compute-0 podman[255382]: 2025-11-22 08:55:28.15530318 +0000 UTC m=+0.100568934 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 08:55:29 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 22 08:55:29 compute-0 podman[255420]: 2025-11-22 08:55:29.335978476 +0000 UTC m=+0.076410411 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release-0.7.12=, version=9.4, build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., name=ubi9, managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler)
Nov 22 08:55:29 compute-0 podman[255421]: 2025-11-22 08:55:29.385011649 +0000 UTC m=+0.114306148 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 08:55:29 compute-0 podman[203476]: time="2025-11-22T08:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:55:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:55:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Nov 22 08:55:31 compute-0 nova_compute[189268]: 2025-11-22 08:55:31.062 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:31 compute-0 openstack_network_exporter[205661]: ERROR   08:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:55:31 compute-0 openstack_network_exporter[205661]: ERROR   08:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:55:31 compute-0 openstack_network_exporter[205661]: ERROR   08:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:55:31 compute-0 openstack_network_exporter[205661]: ERROR   08:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:55:31 compute-0 openstack_network_exporter[205661]: ERROR   08:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:55:32 compute-0 nova_compute[189268]: 2025-11-22 08:55:32.533 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:36 compute-0 nova_compute[189268]: 2025-11-22 08:55:36.066 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:36 compute-0 podman[255461]: 2025-11-22 08:55:36.150611727 +0000 UTC m=+0.097309876 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, managed_by=edpm_ansible, io.openshift.expose-services=)
Nov 22 08:55:37 compute-0 nova_compute[189268]: 2025-11-22 08:55:37.537 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:38 compute-0 podman[255482]: 2025-11-22 08:55:38.109023723 +0000 UTC m=+0.058941297 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:55:41 compute-0 nova_compute[189268]: 2025-11-22 08:55:41.070 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:42 compute-0 nova_compute[189268]: 2025-11-22 08:55:42.541 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:43 compute-0 nova_compute[189268]: 2025-11-22 08:55:43.213 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:55:45 compute-0 nova_compute[189268]: 2025-11-22 08:55:45.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:55:45 compute-0 nova_compute[189268]: 2025-11-22 08:55:45.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:55:45 compute-0 nova_compute[189268]: 2025-11-22 08:55:45.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:55:45 compute-0 nova_compute[189268]: 2025-11-22 08:55:45.681 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:55:45 compute-0 nova_compute[189268]: 2025-11-22 08:55:45.682 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:55:45 compute-0 nova_compute[189268]: 2025-11-22 08:55:45.682 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:55:45 compute-0 nova_compute[189268]: 2025-11-22 08:55:45.683 189273 DEBUG nova.objects.instance [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:55:46 compute-0 nova_compute[189268]: 2025-11-22 08:55:46.074 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:47 compute-0 nova_compute[189268]: 2025-11-22 08:55:47.546 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:48 compute-0 podman[255507]: 2025-11-22 08:55:48.131974547 +0000 UTC m=+0.067328840 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 22 08:55:48 compute-0 podman[255505]: 2025-11-22 08:55:48.159844778 +0000 UTC m=+0.103150793 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 22 08:55:48 compute-0 podman[255506]: 2025-11-22 08:55:48.161009268 +0000 UTC m=+0.100352488 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 22 08:55:48 compute-0 nova_compute[189268]: 2025-11-22 08:55:48.302 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updating instance_info_cache with network_info: [{"id": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "address": "fa:16:3e:84:a4:4f", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped7b62da-e4", "ovs_interfaceid": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:55:48 compute-0 nova_compute[189268]: 2025-11-22 08:55:48.315 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:55:48 compute-0 nova_compute[189268]: 2025-11-22 08:55:48.315 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 08:55:48 compute-0 nova_compute[189268]: 2025-11-22 08:55:48.315 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:55:49 compute-0 nova_compute[189268]: 2025-11-22 08:55:49.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:55:49 compute-0 nova_compute[189268]: 2025-11-22 08:55:49.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:55:49 compute-0 nova_compute[189268]: 2025-11-22 08:55:49.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:55:51 compute-0 nova_compute[189268]: 2025-11-22 08:55:51.078 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:51 compute-0 nova_compute[189268]: 2025-11-22 08:55:51.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:55:52 compute-0 nova_compute[189268]: 2025-11-22 08:55:52.550 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:53 compute-0 nova_compute[189268]: 2025-11-22 08:55:53.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:55:56 compute-0 nova_compute[189268]: 2025-11-22 08:55:56.081 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:57 compute-0 nova_compute[189268]: 2025-11-22 08:55:57.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:55:57 compute-0 nova_compute[189268]: 2025-11-22 08:55:57.552 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:55:59 compute-0 podman[255566]: 2025-11-22 08:55:59.107301189 +0000 UTC m=+0.067019041 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 22 08:55:59 compute-0 podman[255567]: 2025-11-22 08:55:59.117937632 +0000 UTC m=+0.073980957 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:55:59 compute-0 podman[203476]: time="2025-11-22T08:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:55:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:55:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
Nov 22 08:56:00 compute-0 nova_compute[189268]: 2025-11-22 08:56:00.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:00 compute-0 nova_compute[189268]: 2025-11-22 08:56:00.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 08:56:00 compute-0 nova_compute[189268]: 2025-11-22 08:56:00.112 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 08:56:00 compute-0 nova_compute[189268]: 2025-11-22 08:56:00.112 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:00 compute-0 podman[255604]: 2025-11-22 08:56:00.132805113 +0000 UTC m=+0.088433741 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vendor=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 22 08:56:00 compute-0 podman[255605]: 2025-11-22 08:56:00.176287448 +0000 UTC m=+0.125409814 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 08:56:01 compute-0 nova_compute[189268]: 2025-11-22 08:56:01.085 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:01 compute-0 openstack_network_exporter[205661]: ERROR   08:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:56:01 compute-0 openstack_network_exporter[205661]: ERROR   08:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:56:01 compute-0 openstack_network_exporter[205661]: ERROR   08:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:56:01 compute-0 openstack_network_exporter[205661]: ERROR   08:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:56:02 compute-0 nova_compute[189268]: 2025-11-22 08:56:02.116 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:02 compute-0 nova_compute[189268]: 2025-11-22 08:56:02.555 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:03 compute-0 nova_compute[189268]: 2025-11-22 08:56:03.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:03 compute-0 nova_compute[189268]: 2025-11-22 08:56:03.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.089 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.116 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.147 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.148 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.148 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.148 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.209 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.274 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.275 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.337 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.656 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.657 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5194MB free_disk=72.39765548706055GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.657 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.658 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.816 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.817 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.818 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.874 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing inventories for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.928 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating ProviderTree inventory for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.929 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating inventory in ProviderTree for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.949 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing aggregate associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 08:56:06 compute-0 nova_compute[189268]: 2025-11-22 08:56:06.969 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing trait associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, traits: COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 08:56:07 compute-0 nova_compute[189268]: 2025-11-22 08:56:07.011 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:56:07 compute-0 nova_compute[189268]: 2025-11-22 08:56:07.023 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:56:07 compute-0 nova_compute[189268]: 2025-11-22 08:56:07.026 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:56:07 compute-0 nova_compute[189268]: 2025-11-22 08:56:07.026 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:56:07 compute-0 podman[255653]: 2025-11-22 08:56:07.1233883 +0000 UTC m=+0.069495978 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container)
Nov 22 08:56:07 compute-0 nova_compute[189268]: 2025-11-22 08:56:07.560 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:09 compute-0 podman[255673]: 2025-11-22 08:56:09.089607893 +0000 UTC m=+0.050705329 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:56:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:56:09.999 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:56:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:56:09.999 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:56:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:56:09.999 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:56:11 compute-0 nova_compute[189268]: 2025-11-22 08:56:11.093 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:12 compute-0 nova_compute[189268]: 2025-11-22 08:56:12.563 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:16 compute-0 nova_compute[189268]: 2025-11-22 08:56:16.096 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:17 compute-0 nova_compute[189268]: 2025-11-22 08:56:17.566 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:18 compute-0 sshd-session[255696]: Invalid user oracle from 80.94.92.164 port 39358
Nov 22 08:56:18 compute-0 podman[255698]: 2025-11-22 08:56:18.504789714 +0000 UTC m=+0.086618583 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 08:56:18 compute-0 podman[255699]: 2025-11-22 08:56:18.507382914 +0000 UTC m=+0.077605864 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 22 08:56:18 compute-0 podman[255700]: 2025-11-22 08:56:18.533784605 +0000 UTC m=+0.102103244 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 22 08:56:18 compute-0 sshd-session[255696]: Connection closed by invalid user oracle 80.94.92.164 port 39358 [preauth]
Nov 22 08:56:21 compute-0 nova_compute[189268]: 2025-11-22 08:56:21.099 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.097 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, processing may take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.098 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.098 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
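
The run of "Registering pollster" lines above comes from stevedore: ceilometer discovers its compute pollsters as Python entry points and hands each resulting Extension object to the shared ThreadPoolExecutor. A minimal sketch of listing those extensions with stevedore's ExtensionManager, assuming a host where ceilometer and stevedore are installed ("ceilometer.poll.compute" is ceilometer's compute-pollster entry-point namespace; on a machine without ceilometer the manager is simply empty):

    # List the entry points behind the Extension objects logged above.
    # Outside a ceilometer install this prints an empty list.
    from stevedore import extension

    mgr = extension.ExtensionManager(namespace="ceilometer.poll.compute",
                                     invoke_on_load=False)
    print(sorted(ext.name for ext in mgr))   # e.g. ['cpu', 'disk.device.usage', ...]
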
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.104 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5', 'name': 'te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn', 'flavor': {'id': '60cc47c3-347f-4964-bb52-9bef8d0548a9', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '0f738201-0a54-4f17-a455-df9aa7963f79'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6872b219a7f441adb7db6dc2b4e66fd7', 'user_id': '37215e9bc58040aeb55ccd7e534b2a8c', 'hostId': '44bfd8cb608e8e36740e229fabc76c7785419d24d05fef040bbf4521', 'status': 'active', 'metadata': {'metering.server_group': 'e65dbf71-31dd-495a-8544-26d84c5284b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
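
The discovery line above dumps the per-instance record that every subsequent pollster consumes: flavor sizing, image id, tenant and user ids, the libvirt instance name, and the metering.server_group metadata key used to group samples. A small illustration of reading such a record; the dict literal is trimmed from the log line above, and the helper function is hypothetical, not ceilometer API:

    # Hypothetical helper showing the shape of the discovery record above.
    def summarize(rec):
        f = rec["flavor"]
        return (f"{rec['name']} ({rec['id'][:8]}): {f['vcpus']} vCPU, "
                f"{f['ram']} MiB RAM, {f['disk']} GiB disk, "
                f"state={rec['OS-EXT-STS:vm_state']}")

    record = {
        "id": "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5",
        "name": "te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn",
        "flavor": {"name": "m1.nano", "vcpus": 1, "ram": 128, "disk": 1,
                   "ephemeral": 0, "swap": 0},
        "OS-EXT-STS:vm_state": "running",
    }
    print(summarize(record))
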
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.105 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.105 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.105 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': [<NovaLikeServer: te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.106 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': [<NovaLikeServer: te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.106 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-22T08:56:22.105675) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.106 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{'network.incoming.bytes': [<NovaLikeServer: te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.105 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.110 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.111 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
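
Every meter that follows repeats the cycle just shown for network.incoming.bytes: run discovery for the pollster, check whether its source requires hashring coordination, record a heartbeat, turn the inspector stats into samples, and log completion. A schematic of that control flow; all names here are illustrative stand-ins, not ceilometer internals:

    # Schematic of the per-pollster cycle visible in the log above.
    def run_pollster(name, discover, requires_coordination, poll):
        resources = discover()                    # "Executing discovery process ..."
        if requires_coordination(name):           # "Checking if we need coordination ..."
            return                                # only the hashring owner would proceed
        print(f"heartbeat: {name}")               # "Pollster heartbeat update: <name>"
        for resource, volume in poll(resources):  # "<uuid>/<meter> volume: <n>"
            print(f"{resource}/{name} volume: {volume}")
        print(f"Finished polling pollster {name}")

    run_pollster("network.incoming.bytes",
                 discover=lambda: ["4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5"],
                 requires_coordination=lambda _name: False,
                 poll=lambda rs: [(r, 1352) for r in rs])
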
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.111 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.111 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.111 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.112 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.112 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.112 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.112 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.113 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-22T08:56:22.112112) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.113 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.113 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.113 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.113 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.114 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.114 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.114 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.114 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.115 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-22T08:56:22.113996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.115 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.115 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.115 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.115 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.115 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.116 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-22T08:56:22.115646) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.116 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.116 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.117 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.117 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.117 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.117 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.117 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-22T08:56:22.117428) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.140 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/cpu volume: 171020000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.141 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
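
The cpu meter is cumulative guest CPU time in nanoseconds, so the volume 171020000000 above is about 171 s of CPU time since the domain booted; utilisation only falls out of two consecutive samples. A worked example, with the earlier reading invented for illustration:

    # cpu is cumulative CPU time in ns; utilisation needs two samples.
    def cpu_util_pct(prev_ns, cur_ns, interval_s, vcpus=1):
        return 100.0 * (cur_ns - prev_ns) / (interval_s * vcpus * 1e9)

    print(171_020_000_000 / 1e9)   # 171.02 -> seconds of CPU time so far
    # previous reading below is made up; with a 300 s polling interval:
    print(cpu_util_pct(170_600_000_000, 171_020_000_000, 300))   # ~0.14 %
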
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.142 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.142 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.142 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.142 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.143 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.143 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-22T08:56:22.143035) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.143 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.144 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.144 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.145 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.145 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.145 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.145 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.146 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-22T08:56:22.145826) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.161 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.162 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.162 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
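
disk.device.capacity reports bytes per block device, which is why two samples appear: 1073741824 bytes is exactly the 1 GiB root disk implied by the m1.nano flavor ('disk': 1), and 509952 bytes is a small second device (the log does not name it; on a guest like this it is typically the config drive):

    GiB = 1024 ** 3
    print(1073741824 / GiB)   # 1.0 -> the flavor's 1 GiB root disk
    print(509952 / 1024)      # 498.0 KiB -> the small second device
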
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.162 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.162 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.162 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.163 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.163 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.163 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-22T08:56:22.163112) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.197 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.bytes volume: 30149632 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.197 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.198 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.198 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.198 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.198 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.198 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.198 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-22T08:56:22.198669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.199 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.199 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.199 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.199 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.200 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.200 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.200 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.200 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.200 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.200 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-22T08:56:22.200259) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.201 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.201 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.201 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
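
The skip above is the manager's short-circuit: a pollster only runs against resources its discovery method returned this cycle, and network.incoming.bytes.rate got none. When a .rate meter does run, it is derived from two consecutive cumulative readings rather than read directly from libvirt. A sketch of that derivation with invented numbers:

    # *.rate meters are deltas of the cumulative counter over the interval.
    def bytes_per_second(prev_bytes, cur_bytes, elapsed_s):
        return (cur_bytes - prev_bytes) / elapsed_s

    # invented readings: 1052 B then 1352 B, taken 300 s apart
    print(bytes_per_second(1052, 1352, 300.0))   # 1.0 B/s
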
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.201 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.201 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.201 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.201 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.201 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.201 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.latency volume: 1495963975 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.202 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.latency volume: 112899247 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.202 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.202 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-22T08:56:22.201749) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.202 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.202 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.203 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.203 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.203 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.203 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.203 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.203 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.203 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.204 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.204 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.204 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.204 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.requests volume: 330 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.204 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-22T08:56:22.203196) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.204 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-22T08:56:22.204308) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.205 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.205 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.205 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.205 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.205 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.205 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.206 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.206 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.requests volume: 1093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.206 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.206 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.206 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.206 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.207 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.207 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.207 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.207 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.207 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.208 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
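
Taken together with the allocation and capacity samples earlier, the three disk.device.* gauges line up with libvirt's blockInfo triple for each device: capacity is the virtual disk size, allocation the space the host has handed to the image, and usage the physical size of the image on disk, which is why usage (29884416) sits just under allocation (30154752) and far under capacity (1073741824):

    MiB = 1024 ** 2
    print(round(29884416 / MiB, 1))   # 28.5 -> physical image size (usage)
    print(round(30154752 / MiB, 1))   # 28.8 -> host allocation
    print(1073741824 // MiB)          # 1024 -> virtual capacity, 1 GiB
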
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.208 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.208 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.208 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-22T08:56:22.205978) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.208 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-22T08:56:22.207211) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.208 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.209 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.209 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.209 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.209 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-22T08:56:22.209106) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.209 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.210 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.210 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.210 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.210 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.210 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.210 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.210 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.211 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-22T08:56:22.210761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.211 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
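
power.state volume 1 encodes a running domain: both libvirt's virDomainState and nova's power_state use 1 for RUNNING, matching the 'vm_state': 'running' field in the discovery record earlier. The libvirt mapping, for reference:

    # virDomainState codes (libvirt); power.state above reported 1.
    VIR_DOMAIN_STATE = {0: "nostate", 1: "running", 2: "blocked", 3: "paused",
                        4: "shutdown", 5: "shutoff", 6: "crashed",
                        7: "pmsuspended"}
    print(VIR_DOMAIN_STATE[1])   # running
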
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.211 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.211 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.211 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.212 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.212 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.212 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-22T08:56:22.212134) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.212 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.latency volume: 64886120960 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.212 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.213 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.213 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.213 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.213 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.213 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.213 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.214 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.214 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-22T08:56:22.213770) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.214 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.214 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.214 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.214 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.215 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-22T08:56:22.214928) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.215 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.215 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.215 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.215 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.215 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.216 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.216 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.216 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.216 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.216 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.216 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.216 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.217 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-22T08:56:22.216090) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.217 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.217 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-22T08:56:22.217170) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.217 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.217 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.217 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.218 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.218 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.218 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.218 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.218 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.218 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.218 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-22T08:56:22.218290) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.219 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.219 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.219 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.219 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.219 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.219 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.219 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.219 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/memory.usage volume: 43.078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.220 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.220 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-22T08:56:22.219778) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.220 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.220 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.221 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.221 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.221 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.221 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.221 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.221 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.221 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.221 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.221 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.222 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.222 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.222 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.222 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.222 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.222 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.222 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.222 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.222 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.222 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.223 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.223 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.223 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.223 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:56:22.223 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:56:22 compute-0 nova_compute[189268]: 2025-11-22 08:56:22.568 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:26 compute-0 nova_compute[189268]: 2025-11-22 08:56:26.102 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:27 compute-0 nova_compute[189268]: 2025-11-22 08:56:27.571 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:29 compute-0 nova_compute[189268]: 2025-11-22 08:56:29.263 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:29 compute-0 nova_compute[189268]: 2025-11-22 08:56:29.295 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Triggering sync for uuid 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 08:56:29 compute-0 nova_compute[189268]: 2025-11-22 08:56:29.295 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:56:29 compute-0 nova_compute[189268]: 2025-11-22 08:56:29.296 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:56:29 compute-0 nova_compute[189268]: 2025-11-22 08:56:29.320 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:56:29 compute-0 podman[203476]: time="2025-11-22T08:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:56:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:56:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
Nov 22 08:56:30 compute-0 podman[255758]: 2025-11-22 08:56:30.120302831 +0000 UTC m=+0.072120257 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251118)
Nov 22 08:56:30 compute-0 podman[255759]: 2025-11-22 08:56:30.138766501 +0000 UTC m=+0.087232269 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Nov 22 08:56:30 compute-0 podman[255793]: 2025-11-22 08:56:30.249077203 +0000 UTC m=+0.073573356 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., release-0.7.12=, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.expose-services=, com.redhat.component=ubi9-container)
Nov 22 08:56:30 compute-0 podman[255814]: 2025-11-22 08:56:30.373035008 +0000 UTC m=+0.098158430 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:56:31 compute-0 nova_compute[189268]: 2025-11-22 08:56:31.105 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:31 compute-0 openstack_network_exporter[205661]: ERROR   08:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:56:31 compute-0 openstack_network_exporter[205661]: ERROR   08:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:56:31 compute-0 openstack_network_exporter[205661]: ERROR   08:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:56:31 compute-0 openstack_network_exporter[205661]: ERROR   08:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:56:31 compute-0 openstack_network_exporter[205661]: ERROR   08:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:56:32 compute-0 nova_compute[189268]: 2025-11-22 08:56:32.576 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:36 compute-0 nova_compute[189268]: 2025-11-22 08:56:36.109 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:37 compute-0 nova_compute[189268]: 2025-11-22 08:56:37.581 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:38 compute-0 podman[255840]: 2025-11-22 08:56:38.142310879 +0000 UTC m=+0.093720132 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, release=1755695350, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=openstack_network_exporter, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, config_id=edpm, vcs-type=git)
Nov 22 08:56:40 compute-0 podman[255859]: 2025-11-22 08:56:40.105794909 +0000 UTC m=+0.063731165 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:56:41 compute-0 nova_compute[189268]: 2025-11-22 08:56:41.113 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:42 compute-0 nova_compute[189268]: 2025-11-22 08:56:42.583 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:43 compute-0 nova_compute[189268]: 2025-11-22 08:56:43.132 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:46 compute-0 nova_compute[189268]: 2025-11-22 08:56:46.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:46 compute-0 nova_compute[189268]: 2025-11-22 08:56:46.097 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:46 compute-0 nova_compute[189268]: 2025-11-22 08:56:46.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:56:46 compute-0 nova_compute[189268]: 2025-11-22 08:56:46.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:56:46 compute-0 nova_compute[189268]: 2025-11-22 08:56:46.117 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:47 compute-0 nova_compute[189268]: 2025-11-22 08:56:47.290 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:56:47 compute-0 nova_compute[189268]: 2025-11-22 08:56:47.291 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:56:47 compute-0 nova_compute[189268]: 2025-11-22 08:56:47.291 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:56:47 compute-0 nova_compute[189268]: 2025-11-22 08:56:47.292 189273 DEBUG nova.objects.instance [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:56:47 compute-0 nova_compute[189268]: 2025-11-22 08:56:47.585 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:49 compute-0 podman[255882]: 2025-11-22 08:56:49.113378088 +0000 UTC m=+0.067594647 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:56:49 compute-0 podman[255881]: 2025-11-22 08:56:49.117689223 +0000 UTC m=+0.076705839 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:56:49 compute-0 podman[255883]: 2025-11-22 08:56:49.125499961 +0000 UTC m=+0.075370635 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 08:56:50 compute-0 nova_compute[189268]: 2025-11-22 08:56:50.351 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updating instance_info_cache with network_info: [{"id": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "address": "fa:16:3e:84:a4:4f", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped7b62da-e4", "ovs_interfaceid": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:56:50 compute-0 nova_compute[189268]: 2025-11-22 08:56:50.367 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:56:50 compute-0 nova_compute[189268]: 2025-11-22 08:56:50.368 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 08:56:50 compute-0 nova_compute[189268]: 2025-11-22 08:56:50.369 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:51 compute-0 nova_compute[189268]: 2025-11-22 08:56:51.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:51 compute-0 nova_compute[189268]: 2025-11-22 08:56:51.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:56:51 compute-0 nova_compute[189268]: 2025-11-22 08:56:51.120 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:52 compute-0 nova_compute[189268]: 2025-11-22 08:56:52.589 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:53 compute-0 nova_compute[189268]: 2025-11-22 08:56:53.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:54 compute-0 nova_compute[189268]: 2025-11-22 08:56:54.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:56 compute-0 nova_compute[189268]: 2025-11-22 08:56:56.124 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:57 compute-0 nova_compute[189268]: 2025-11-22 08:56:57.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:56:57 compute-0 nova_compute[189268]: 2025-11-22 08:56:57.593 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:56:59 compute-0 podman[203476]: time="2025-11-22T08:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:56:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:56:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Nov 22 08:57:01 compute-0 nova_compute[189268]: 2025-11-22 08:57:01.128 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:01 compute-0 openstack_network_exporter[205661]: ERROR   08:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:57:01 compute-0 openstack_network_exporter[205661]: ERROR   08:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:57:01 compute-0 openstack_network_exporter[205661]: ERROR   08:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:57:01 compute-0 openstack_network_exporter[205661]: ERROR   08:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:57:01 compute-0 openstack_network_exporter[205661]: ERROR   08:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:57:01 compute-0 podman[255947]: 2025-11-22 08:57:01.576033857 +0000 UTC m=+0.072297292 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 08:57:01 compute-0 podman[255945]: 2025-11-22 08:57:01.590492582 +0000 UTC m=+0.079890604 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 22 08:57:01 compute-0 podman[255939]: 2025-11-22 08:57:01.593770499 +0000 UTC m=+0.107463937 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, container_name=kepler, vendor=Red Hat, Inc., managed_by=edpm_ansible, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, architecture=x86_64, io.openshift.tags=base rhel9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, release-0.7.12=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0)
Nov 22 08:57:01 compute-0 podman[255940]: 2025-11-22 08:57:01.63710634 +0000 UTC m=+0.143323719 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 08:57:02 compute-0 nova_compute[189268]: 2025-11-22 08:57:02.596 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:06 compute-0 nova_compute[189268]: 2025-11-22 08:57:06.131 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.123 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.124 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.124 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.125 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.189 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.250 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.252 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.311 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.599 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.659 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.660 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5189MB free_disk=72.39765548706055GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.661 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.661 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.727 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.728 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.728 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.775 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.790 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.793 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:57:07 compute-0 nova_compute[189268]: 2025-11-22 08:57:07.793 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:57:09 compute-0 podman[256024]: 2025-11-22 08:57:09.121749068 +0000 UTC m=+0.070867075 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, container_name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7)
Nov 22 08:57:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:57:10.001 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:57:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:57:10.001 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:57:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:57:10.002 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:57:11 compute-0 podman[256044]: 2025-11-22 08:57:11.116857108 +0000 UTC m=+0.073488904 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 22 08:57:11 compute-0 nova_compute[189268]: 2025-11-22 08:57:11.136 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:12 compute-0 nova_compute[189268]: 2025-11-22 08:57:12.603 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:16 compute-0 nova_compute[189268]: 2025-11-22 08:57:16.143 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:17 compute-0 nova_compute[189268]: 2025-11-22 08:57:17.605 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:20 compute-0 podman[256069]: 2025-11-22 08:57:20.123464042 +0000 UTC m=+0.067631628 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 08:57:20 compute-0 podman[256070]: 2025-11-22 08:57:20.155124503 +0000 UTC m=+0.095735285 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:57:20 compute-0 podman[256068]: 2025-11-22 08:57:20.160207428 +0000 UTC m=+0.103923123 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 08:57:21 compute-0 nova_compute[189268]: 2025-11-22 08:57:21.148 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:22 compute-0 nova_compute[189268]: 2025-11-22 08:57:22.609 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:26 compute-0 nova_compute[189268]: 2025-11-22 08:57:26.152 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:27 compute-0 nova_compute[189268]: 2025-11-22 08:57:27.614 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:29 compute-0 podman[203476]: time="2025-11-22T08:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:57:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:57:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
Nov 22 08:57:31 compute-0 nova_compute[189268]: 2025-11-22 08:57:31.156 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:31 compute-0 openstack_network_exporter[205661]: ERROR   08:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:57:31 compute-0 openstack_network_exporter[205661]: ERROR   08:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:57:31 compute-0 openstack_network_exporter[205661]: ERROR   08:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:57:31 compute-0 openstack_network_exporter[205661]: ERROR   08:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:57:31 compute-0 openstack_network_exporter[205661]: ERROR   08:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:57:32 compute-0 podman[256129]: 2025-11-22 08:57:32.129851555 +0000 UTC m=+0.076770091 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4)
Nov 22 08:57:32 compute-0 podman[256127]: 2025-11-22 08:57:32.129869286 +0000 UTC m=+0.085537184 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.expose-services=, distribution-scope=public, name=ubi9, vendor=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=edpm, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 22 08:57:32 compute-0 podman[256130]: 2025-11-22 08:57:32.143234661 +0000 UTC m=+0.088685138 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 08:57:32 compute-0 podman[256128]: 2025-11-22 08:57:32.189745447 +0000 UTC m=+0.142172680 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:57:32 compute-0 nova_compute[189268]: 2025-11-22 08:57:32.618 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:36 compute-0 nova_compute[189268]: 2025-11-22 08:57:36.160 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:37 compute-0 nova_compute[189268]: 2025-11-22 08:57:37.622 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:40 compute-0 podman[256209]: 2025-11-22 08:57:40.135760465 +0000 UTC m=+0.082387231 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.7, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vendor=Red Hat, Inc.)
Nov 22 08:57:41 compute-0 nova_compute[189268]: 2025-11-22 08:57:41.164 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:42 compute-0 podman[256230]: 2025-11-22 08:57:42.09905168 +0000 UTC m=+0.060347565 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 08:57:42 compute-0 nova_compute[189268]: 2025-11-22 08:57:42.624 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:43 compute-0 nova_compute[189268]: 2025-11-22 08:57:43.794 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:57:46 compute-0 nova_compute[189268]: 2025-11-22 08:57:46.169 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:47 compute-0 nova_compute[189268]: 2025-11-22 08:57:47.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:57:47 compute-0 nova_compute[189268]: 2025-11-22 08:57:47.626 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:48 compute-0 nova_compute[189268]: 2025-11-22 08:57:48.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:57:48 compute-0 nova_compute[189268]: 2025-11-22 08:57:48.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:57:48 compute-0 nova_compute[189268]: 2025-11-22 08:57:48.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:57:48 compute-0 nova_compute[189268]: 2025-11-22 08:57:48.283 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:57:48 compute-0 nova_compute[189268]: 2025-11-22 08:57:48.283 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:57:48 compute-0 nova_compute[189268]: 2025-11-22 08:57:48.283 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:57:48 compute-0 nova_compute[189268]: 2025-11-22 08:57:48.284 189273 DEBUG nova.objects.instance [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:57:50 compute-0 nova_compute[189268]: 2025-11-22 08:57:50.297 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updating instance_info_cache with network_info: [{"id": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "address": "fa:16:3e:84:a4:4f", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped7b62da-e4", "ovs_interfaceid": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:57:50 compute-0 nova_compute[189268]: 2025-11-22 08:57:50.314 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:57:50 compute-0 nova_compute[189268]: 2025-11-22 08:57:50.315 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 08:57:50 compute-0 nova_compute[189268]: 2025-11-22 08:57:50.316 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:57:51 compute-0 nova_compute[189268]: 2025-11-22 08:57:51.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:57:51 compute-0 nova_compute[189268]: 2025-11-22 08:57:51.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:57:51 compute-0 podman[256257]: 2025-11-22 08:57:51.111701414 +0000 UTC m=+0.062681136 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 22 08:57:51 compute-0 podman[256255]: 2025-11-22 08:57:51.117147239 +0000 UTC m=+0.073929176 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 08:57:51 compute-0 podman[256256]: 2025-11-22 08:57:51.124245398 +0000 UTC m=+0.074193493 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 22 08:57:51 compute-0 nova_compute[189268]: 2025-11-22 08:57:51.174 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:52 compute-0 nova_compute[189268]: 2025-11-22 08:57:52.627 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:55 compute-0 nova_compute[189268]: 2025-11-22 08:57:55.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:57:55 compute-0 nova_compute[189268]: 2025-11-22 08:57:55.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:57:56 compute-0 nova_compute[189268]: 2025-11-22 08:57:56.178 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:56 compute-0 nova_compute[189268]: 2025-11-22 08:57:56.787 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "4809ca0d-4075-4d68-8ee7-5275c4253891" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:57:56 compute-0 nova_compute[189268]: 2025-11-22 08:57:56.789 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "4809ca0d-4075-4d68-8ee7-5275c4253891" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:57:56 compute-0 nova_compute[189268]: 2025-11-22 08:57:56.804 189273 DEBUG nova.compute.manager [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 08:57:56 compute-0 nova_compute[189268]: 2025-11-22 08:57:56.898 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:57:56 compute-0 nova_compute[189268]: 2025-11-22 08:57:56.900 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:57:56 compute-0 nova_compute[189268]: 2025-11-22 08:57:56.912 189273 DEBUG nova.virt.hardware [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 08:57:56 compute-0 nova_compute[189268]: 2025-11-22 08:57:56.913 189273 INFO nova.compute.claims [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Claim successful on node compute-0.ctlplane.example.com
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.045 189273 DEBUG nova.compute.provider_tree [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.062 189273 DEBUG nova.scheduler.client.report [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.098 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.100 189273 DEBUG nova.compute.manager [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.145 189273 DEBUG nova.compute.manager [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.147 189273 DEBUG nova.network.neutron [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.175 189273 INFO nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.194 189273 DEBUG nova.compute.manager [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.282 189273 DEBUG nova.compute.manager [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.284 189273 DEBUG nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.284 189273 INFO nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Creating image(s)
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.285 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "/var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.286 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "/var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.286 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "/var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.298 189273 DEBUG oslo_concurrency.processutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.388 189273 DEBUG oslo_concurrency.processutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
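[Annotation] The /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 wrapper caps the address space (1 GiB) and CPU time (30 s) of the qemu-img child so a malformed image cannot exhaust the host. A sketch of issuing the same call through oslo.concurrency, with limits mirroring the flags in the log:

    # Sketch only: the prlimit-wrapped qemu-img info call from the log,
    # made via oslo.concurrency processutils.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1 * 1024 ** 3,  # --as
                                        cpu_time=30)                  # --cpu
    out, err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f',
        '--force-share', '--output=json',
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},
        prlimit=limits)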
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.390 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.391 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.402 189273 DEBUG oslo_concurrency.processutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.464 189273 DEBUG oslo_concurrency.processutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.465 189273 DEBUG oslo_concurrency.processutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f,backing_fmt=raw /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.504 189273 DEBUG oslo_concurrency.processutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f,backing_fmt=raw /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk 1073741824" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
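[Annotation] The qemu-img create above makes the instance disk a thin qcow2 overlay on the cached base image, so the base is shared read-only between instances. A minimal sketch to confirm the backing chain of the overlay just created (paths copied from the log):

    # Verify the overlay's format and backing file; keys are standard
    # qemu-img JSON output fields.
    import json, subprocess

    info = json.loads(subprocess.check_output(
        ['qemu-img', 'info', '--output=json', '--force-share',
         '/var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk']))
    print(info['format'])                # qcow2
    print(info.get('backing-filename')) # /var/lib/nova/instances/_base/1ba0dc7d...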
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.506 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.506 189273 DEBUG oslo_concurrency.processutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.599 189273 DEBUG oslo_concurrency.processutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1ba0dc7d43cd6a5267db9e9bdc00c210dfb8eb9f --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.601 189273 DEBUG nova.virt.disk.api [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Checking if we can resize image /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.602 189273 DEBUG oslo_concurrency.processutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.630 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.698 189273 DEBUG oslo_concurrency.processutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.700 189273 DEBUG nova.virt.disk.api [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Cannot resize image /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
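[Annotation] The resize check is grow-only: the flavor's 1 GiB root is not larger than the overlay's current virtual size, so nova logs "Cannot resize image ... to a smaller size." and skips the resize. A sketch of the check under that assumption:

    # Grow-only resize check behind the log line above: the disk is
    # resized only when the requested size exceeds the virtual size.
    def can_resize_image(current_virtual_size: int, requested_size: int) -> bool:
        # Shrinking a disk under a guest is unsafe, so it is refused.
        return requested_size > current_virtual_size

    print(can_resize_image(1073741824, 1073741824))  # False -> resize skipped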
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.701 189273 DEBUG nova.objects.instance [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lazy-loading 'migration_context' on Instance uuid 4809ca0d-4075-4d68-8ee7-5275c4253891 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.715 189273 DEBUG nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.716 189273 DEBUG nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Ensure instance console log exists: /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.717 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.718 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:57:57 compute-0 nova_compute[189268]: 2025-11-22 08:57:57.719 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:57:58 compute-0 nova_compute[189268]: 2025-11-22 08:57:58.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:57:58 compute-0 nova_compute[189268]: 2025-11-22 08:57:58.322 189273 DEBUG nova.policy [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '37215e9bc58040aeb55ccd7e534b2a8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6872b219a7f441adb7db6dc2b4e66fd7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
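[Annotation] This policy failure is expected for a plain member/reader token: network:attach_external_network is admin-only by default, so the build simply proceeds on a non-external network. An illustrative oslo.policy check mirroring the credentials in the log (the check string "is_admin:True" is shown for illustration of the admin-only default):

    # Illustrative oslo.policy evaluation; credentials abbreviated from
    # the log line above.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'is_admin:True'))
    creds = {'roles': ['member', 'reader'], 'is_admin': False,
             'project_id': '6872b219a7f441adb7db6dc2b4e66fd7'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))  # False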
Nov 22 08:57:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:57:59.661 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:cf:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'd6:f7:8f:a1:cd:35'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:57:59 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:57:59.662 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 08:57:59 compute-0 nova_compute[189268]: 2025-11-22 08:57:59.662 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:57:59 compute-0 podman[203476]: time="2025-11-22T08:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:57:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:57:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4814 "" "Go-http-client/1.1"
Nov 22 08:57:59 compute-0 nova_compute[189268]: 2025-11-22 08:57:59.776 189273 DEBUG nova.network.neutron [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Successfully created port: 9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 08:58:00 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:58:00.664 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:58:01 compute-0 nova_compute[189268]: 2025-11-22 08:58:01.011 189273 DEBUG nova.network.neutron [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Successfully updated port: 9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 08:58:01 compute-0 nova_compute[189268]: 2025-11-22 08:58:01.016 189273 DEBUG nova.compute.manager [req-4b35c9b3-c5ea-4344-83fb-bab20c724718 req-69515bfb-3755-4b01-a9b1-544ebb1ba06c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Received event network-changed-9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:58:01 compute-0 nova_compute[189268]: 2025-11-22 08:58:01.017 189273 DEBUG nova.compute.manager [req-4b35c9b3-c5ea-4344-83fb-bab20c724718 req-69515bfb-3755-4b01-a9b1-544ebb1ba06c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Refreshing instance network info cache due to event network-changed-9ec3e8b1-78a3-47e8-81c4-f0747a3e1915. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 08:58:01 compute-0 nova_compute[189268]: 2025-11-22 08:58:01.018 189273 DEBUG oslo_concurrency.lockutils [req-4b35c9b3-c5ea-4344-83fb-bab20c724718 req-69515bfb-3755-4b01-a9b1-544ebb1ba06c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "refresh_cache-4809ca0d-4075-4d68-8ee7-5275c4253891" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:58:01 compute-0 nova_compute[189268]: 2025-11-22 08:58:01.018 189273 DEBUG oslo_concurrency.lockutils [req-4b35c9b3-c5ea-4344-83fb-bab20c724718 req-69515bfb-3755-4b01-a9b1-544ebb1ba06c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquired lock "refresh_cache-4809ca0d-4075-4d68-8ee7-5275c4253891" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:58:01 compute-0 nova_compute[189268]: 2025-11-22 08:58:01.019 189273 DEBUG nova.network.neutron [req-4b35c9b3-c5ea-4344-83fb-bab20c724718 req-69515bfb-3755-4b01-a9b1-544ebb1ba06c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Refreshing network info cache for port 9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 08:58:01 compute-0 nova_compute[189268]: 2025-11-22 08:58:01.030 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "refresh_cache-4809ca0d-4075-4d68-8ee7-5275c4253891" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:58:01 compute-0 nova_compute[189268]: 2025-11-22 08:58:01.183 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:01 compute-0 nova_compute[189268]: 2025-11-22 08:58:01.309 189273 DEBUG nova.network.neutron [req-4b35c9b3-c5ea-4344-83fb-bab20c724718 req-69515bfb-3755-4b01-a9b1-544ebb1ba06c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 08:58:01 compute-0 openstack_network_exporter[205661]: ERROR   08:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:58:01 compute-0 openstack_network_exporter[205661]: ERROR   08:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:58:01 compute-0 openstack_network_exporter[205661]: ERROR   08:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:58:01 compute-0 openstack_network_exporter[205661]: ERROR   08:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:58:01 compute-0 openstack_network_exporter[205661]: ERROR   08:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:58:01 compute-0 nova_compute[189268]: 2025-11-22 08:58:01.637 189273 DEBUG nova.network.neutron [req-4b35c9b3-c5ea-4344-83fb-bab20c724718 req-69515bfb-3755-4b01-a9b1-544ebb1ba06c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:58:01 compute-0 nova_compute[189268]: 2025-11-22 08:58:01.652 189273 DEBUG oslo_concurrency.lockutils [req-4b35c9b3-c5ea-4344-83fb-bab20c724718 req-69515bfb-3755-4b01-a9b1-544ebb1ba06c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Releasing lock "refresh_cache-4809ca0d-4075-4d68-8ee7-5275c4253891" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:58:01 compute-0 nova_compute[189268]: 2025-11-22 08:58:01.653 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquired lock "refresh_cache-4809ca0d-4075-4d68-8ee7-5275c4253891" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:58:01 compute-0 nova_compute[189268]: 2025-11-22 08:58:01.654 189273 DEBUG nova.network.neutron [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 08:58:02 compute-0 nova_compute[189268]: 2025-11-22 08:58:02.465 189273 DEBUG nova.network.neutron [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 08:58:02 compute-0 nova_compute[189268]: 2025-11-22 08:58:02.632 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:03 compute-0 podman[256329]: 2025-11-22 08:58:03.15388177 +0000 UTC m=+0.083463080 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 22 08:58:03 compute-0 podman[256327]: 2025-11-22 08:58:03.192614009 +0000 UTC m=+0.129542864 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 08:58:03 compute-0 podman[256326]: 2025-11-22 08:58:03.194117579 +0000 UTC m=+0.135307627 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, io.openshift.expose-services=, config_id=edpm, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, container_name=kepler, release-0.7.12=)
Nov 22 08:58:03 compute-0 podman[256328]: 2025-11-22 08:58:03.195772543 +0000 UTC m=+0.128258160 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 08:58:03 compute-0 nova_compute[189268]: 2025-11-22 08:58:03.751 189273 DEBUG nova.network.neutron [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Updating instance_info_cache with network_info: [{"id": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "address": "fa:16:3e:5e:e6:af", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.103", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec3e8b1-78", "ovs_interfaceid": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.017 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Releasing lock "refresh_cache-4809ca0d-4075-4d68-8ee7-5275c4253891" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.018 189273 DEBUG nova.compute.manager [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Instance network_info: |[{"id": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "address": "fa:16:3e:5e:e6:af", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.103", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec3e8b1-78", "ovs_interfaceid": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.021 189273 DEBUG nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Start _get_guest_xml network_info=[{"id": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "address": "fa:16:3e:5e:e6:af", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.103", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec3e8b1-78", "ovs_interfaceid": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=0f738201-0a54-4f17-a455-df9aa7963f79,min_disk=0,min_ram=0,name='tempest-scenario-img--1939725698',owner='6872b219a7f441adb7db6dc2b4e66fd7',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T08:53:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'image_id': '0f738201-0a54-4f17-a455-df9aa7963f79'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.027 189273 WARNING nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.037 189273 DEBUG nova.virt.libvirt.host [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.038 189273 DEBUG nova.virt.libvirt.host [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.041 189273 DEBUG nova.virt.libvirt.host [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.042 189273 DEBUG nova.virt.libvirt.host [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.043 189273 DEBUG nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.043 189273 DEBUG nova.virt.hardware [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T08:46:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='60cc47c3-347f-4964-bb52-9bef8d0548a9',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T08:53:08Z,direct_url=<?>,disk_format='qcow2',id=0f738201-0a54-4f17-a455-df9aa7963f79,min_disk=0,min_ram=0,name='tempest-scenario-img--1939725698',owner='6872b219a7f441adb7db6dc2b4e66fd7',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T08:53:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.044 189273 DEBUG nova.virt.hardware [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.045 189273 DEBUG nova.virt.hardware [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.045 189273 DEBUG nova.virt.hardware [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.045 189273 DEBUG nova.virt.hardware [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.046 189273 DEBUG nova.virt.hardware [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.046 189273 DEBUG nova.virt.hardware [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.047 189273 DEBUG nova.virt.hardware [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.048 189273 DEBUG nova.virt.hardware [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.048 189273 DEBUG nova.virt.hardware [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.049 189273 DEBUG nova.virt.hardware [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
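[Annotation] The topology search traced above enumerates every (sockets, cores, threads) triple whose product equals the vCPU count and that fits the limits; with 1 vCPU and no flavor/image preference, 1:1:1 is the only candidate. A sketch of that enumeration (function name hypothetical):

    # Enumerate candidate CPU topologies as in the log: product must
    # equal the vCPU count, each factor within its limit.
    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1, 65536, 65536, 65536)))  # [(1, 1, 1)]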
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.053 189273 DEBUG nova.virt.libvirt.vif [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:57:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-1646439-asg-gba3vv6vgk7b-tmn4otq576rq-xk2uuzpcqq5p',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-1646439-asg-gba3vv6vgk7b-tmn4otq576rq-xk2uuzpcqq5p',id=16,image_ref='0f738201-0a54-4f17-a455-df9aa7963f79',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='e65dbf71-31dd-495a-8544-26d84c5284b3'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6872b219a7f441adb7db6dc2b4e66fd7',ramdisk_id='',reservation_id='r-1xmx0z8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0f738201-0a54-4f17-a455-df9aa7963f79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1457752866',owner_user_name='tempest-PrometheusGabbiTest-1457752866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:57:57Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='37215e9bc58040aeb55ccd7e534b2a8c',uuid=4809ca0d-4075-4d68-8ee7-5275c4253891,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "address": "fa:16:3e:5e:e6:af", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.103", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec3e8b1-78", "ovs_interfaceid": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.054 189273 DEBUG nova.network.os_vif_util [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Converting VIF {"id": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "address": "fa:16:3e:5e:e6:af", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.103", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec3e8b1-78", "ovs_interfaceid": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.055 189273 DEBUG nova.network.os_vif_util [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:e6:af,bridge_name='br-int',has_traffic_filtering=True,id=9ec3e8b1-78a3-47e8-81c4-f0747a3e1915,network=Network(8ee541ea-f059-4138-b6cf-87ec84c3e9f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec3e8b1-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.056 189273 DEBUG nova.objects.instance [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4809ca0d-4075-4d68-8ee7-5275c4253891 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.070 189273 DEBUG nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] End _get_guest_xml xml=<domain type="kvm">
Nov 22 08:58:04 compute-0 nova_compute[189268]:   <uuid>4809ca0d-4075-4d68-8ee7-5275c4253891</uuid>
Nov 22 08:58:04 compute-0 nova_compute[189268]:   <name>instance-00000010</name>
Nov 22 08:58:04 compute-0 nova_compute[189268]:   <memory>131072</memory>
Nov 22 08:58:04 compute-0 nova_compute[189268]:   <vcpu>1</vcpu>
Nov 22 08:58:04 compute-0 nova_compute[189268]:   <metadata>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <nova:name>te-1646439-asg-gba3vv6vgk7b-tmn4otq576rq-xk2uuzpcqq5p</nova:name>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <nova:creationTime>2025-11-22 08:58:04</nova:creationTime>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <nova:flavor name="m1.nano">
Nov 22 08:58:04 compute-0 nova_compute[189268]:         <nova:memory>128</nova:memory>
Nov 22 08:58:04 compute-0 nova_compute[189268]:         <nova:disk>1</nova:disk>
Nov 22 08:58:04 compute-0 nova_compute[189268]:         <nova:swap>0</nova:swap>
Nov 22 08:58:04 compute-0 nova_compute[189268]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 08:58:04 compute-0 nova_compute[189268]:         <nova:vcpus>1</nova:vcpus>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       </nova:flavor>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <nova:owner>
Nov 22 08:58:04 compute-0 nova_compute[189268]:         <nova:user uuid="37215e9bc58040aeb55ccd7e534b2a8c">tempest-PrometheusGabbiTest-1457752866-project-member</nova:user>
Nov 22 08:58:04 compute-0 nova_compute[189268]:         <nova:project uuid="6872b219a7f441adb7db6dc2b4e66fd7">tempest-PrometheusGabbiTest-1457752866</nova:project>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       </nova:owner>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <nova:root type="image" uuid="0f738201-0a54-4f17-a455-df9aa7963f79"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <nova:ports>
Nov 22 08:58:04 compute-0 nova_compute[189268]:         <nova:port uuid="9ec3e8b1-78a3-47e8-81c4-f0747a3e1915">
Nov 22 08:58:04 compute-0 nova_compute[189268]:           <nova:ip type="fixed" address="10.100.3.103" ipVersion="4"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:         </nova:port>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       </nova:ports>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     </nova:instance>
Nov 22 08:58:04 compute-0 nova_compute[189268]:   </metadata>
Nov 22 08:58:04 compute-0 nova_compute[189268]:   <sysinfo type="smbios">
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <system>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <entry name="manufacturer">RDO</entry>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <entry name="product">OpenStack Compute</entry>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <entry name="serial">4809ca0d-4075-4d68-8ee7-5275c4253891</entry>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <entry name="uuid">4809ca0d-4075-4d68-8ee7-5275c4253891</entry>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <entry name="family">Virtual Machine</entry>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     </system>
Nov 22 08:58:04 compute-0 nova_compute[189268]:   </sysinfo>
Nov 22 08:58:04 compute-0 nova_compute[189268]:   <os>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <boot dev="hd"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <smbios mode="sysinfo"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:   </os>
Nov 22 08:58:04 compute-0 nova_compute[189268]:   <features>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <acpi/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <apic/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <vmcoreinfo/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:   </features>
Nov 22 08:58:04 compute-0 nova_compute[189268]:   <clock offset="utc">
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <timer name="hpet" present="no"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:   </clock>
Nov 22 08:58:04 compute-0 nova_compute[189268]:   <cpu mode="host-model" match="exact">
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:   </cpu>
Nov 22 08:58:04 compute-0 nova_compute[189268]:   <devices>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <disk type="file" device="disk">
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <target dev="vda" bus="virtio"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <disk type="file" device="cdrom">
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <driver name="qemu" type="raw" cache="none"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <source file="/var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk.config"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <target dev="sda" bus="sata"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     </disk>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <interface type="ethernet">
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <mac address="fa:16:3e:5e:e6:af"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <mtu size="1442"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <target dev="tap9ec3e8b1-78"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     </interface>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <serial type="pty">
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <log file="/var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/console.log" append="off"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     </serial>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <video>
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <model type="virtio"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     </video>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <input type="tablet" bus="usb"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <rng model="virtio">
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <backend model="random">/dev/urandom</backend>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     </rng>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <controller type="usb" index="0"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     <memballoon model="virtio">
Nov 22 08:58:04 compute-0 nova_compute[189268]:       <stats period="10"/>
Nov 22 08:58:04 compute-0 nova_compute[189268]:     </memballoon>
Nov 22 08:58:04 compute-0 nova_compute[189268]:   </devices>
Nov 22 08:58:04 compute-0 nova_compute[189268]: </domain>
Nov 22 08:58:04 compute-0 nova_compute[189268]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.079 189273 DEBUG nova.compute.manager [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Preparing to wait for external event network-vif-plugged-9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.080 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "4809ca0d-4075-4d68-8ee7-5275c4253891-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.081 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "4809ca0d-4075-4d68-8ee7-5275c4253891-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.081 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "4809ca0d-4075-4d68-8ee7-5275c4253891-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
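The Acquiring / acquired :: waited / released :: held triple above is the standard oslo.concurrency trace for a named internal lock; Nova serializes per-instance event bookkeeping under the lock name "<uuid>-events". A rough sketch of the same pattern, with the lock name taken from the log and the body purely illustrative, not Nova's actual code:

    # Sketch of the oslo.concurrency per-instance lock pattern traced above.
    from oslo_concurrency import lockutils

    instance_uuid = "4809ca0d-4075-4d68-8ee7-5275c4253891"

    with lockutils.lock(f"{instance_uuid}-events"):
        # Mutate the per-instance event registry here; the library itself
        # emits the "Acquiring" / "acquired :: waited" / "released :: held"
        # debug lines seen in the log.
        pass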
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.082 189273 DEBUG nova.virt.libvirt.vif [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T08:57:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-1646439-asg-gba3vv6vgk7b-tmn4otq576rq-xk2uuzpcqq5p',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-1646439-asg-gba3vv6vgk7b-tmn4otq576rq-xk2uuzpcqq5p',id=16,image_ref='0f738201-0a54-4f17-a455-df9aa7963f79',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='e65dbf71-31dd-495a-8544-26d84c5284b3'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6872b219a7f441adb7db6dc2b4e66fd7',ramdisk_id='',reservation_id='r-1xmx0z8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0f738201-0a54-4f17-a455-df9aa7963f79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1457752866',owner_user_name='tempest-PrometheusGabbiTest-1457752866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T08:57:57Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='37215e9bc58040aeb55ccd7e534b2a8c',uuid=4809ca0d-4075-4d68-8ee7-5275c4253891,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "address": "fa:16:3e:5e:e6:af", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.103", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec3e8b1-78", "ovs_interfaceid": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
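The Instance record above carries the tenant's boot script base64-encoded in its user_data field. Decoding it with the standard library (the payload below is the exact string from the log) shows this Tempest guest simply burns CPU for five minutes:

    # Decode the user_data field from the Instance record logged above.
    import base64

    user_data = ("IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJh"
                 "bmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==")
    print(base64.b64decode(user_data).decode())
    # Output:
    # #!/bin/sh
    # echo 'Loading CPU'
    # set -v
    # cat /dev/urandom > /dev/null & sleep 300 ; kill $!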
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.083 189273 DEBUG nova.network.os_vif_util [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Converting VIF {"id": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "address": "fa:16:3e:5e:e6:af", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.103", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec3e8b1-78", "ovs_interfaceid": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.084 189273 DEBUG nova.network.os_vif_util [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:e6:af,bridge_name='br-int',has_traffic_filtering=True,id=9ec3e8b1-78a3-47e8-81c4-f0747a3e1915,network=Network(8ee541ea-f059-4138-b6cf-87ec84c3e9f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec3e8b1-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.084 189273 DEBUG os_vif [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:e6:af,bridge_name='br-int',has_traffic_filtering=True,id=9ec3e8b1-78a3-47e8-81c4-f0747a3e1915,network=Network(8ee541ea-f059-4138-b6cf-87ec84c3e9f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec3e8b1-78') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.085 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.086 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.087 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.090 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.091 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ec3e8b1-78, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.091 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9ec3e8b1-78, col_values=(('external_ids', {'iface-id': '9ec3e8b1-78a3-47e8-81c4-f0747a3e1915', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:e6:af', 'vm-uuid': '4809ca0d-4075-4d68-8ee7-5275c4253891'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.093 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.095 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 08:58:04 compute-0 NetworkManager[56326]: <info>  [1763801884.0963] manager: (tap9ec3e8b1-78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.102 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.104 189273 INFO os_vif [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:e6:af,bridge_name='br-int',has_traffic_filtering=True,id=9ec3e8b1-78a3-47e8-81c4-f0747a3e1915,network=Network(8ee541ea-f059-4138-b6cf-87ec84c3e9f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec3e8b1-78')
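Plugging the VIF took two idempotent ovsdbapp transactions: AddBridgeCommand for br-int, then AddPortCommand plus a DbSetCommand writing the Neutron external_ids that let ovn-controller claim the port a few lines further down. An illustrative hand-rolled equivalent using the ovs-vsctl CLI from Python, with all values copied from the log (requires OVS admin rights; this is a sketch, not os-vif's actual code path):

    # Re-create the logged ovsdbapp transactions via the ovs-vsctl CLI.
    import subprocess

    def vsctl(*args: str) -> None:
        subprocess.run(("ovs-vsctl",) + args, check=True)

    # Transaction 1: ensure the integration bridge exists (no-op here).
    vsctl("--may-exist", "add-br", "br-int",
          "--", "set", "Bridge", "br-int", "datapath_type=system")
    # Transaction 2: add the tap port and the external_ids OVN matches on.
    vsctl("--may-exist", "add-port", "br-int", "tap9ec3e8b1-78",
          "--", "set", "Interface", "tap9ec3e8b1-78",
          "external_ids:iface-id=9ec3e8b1-78a3-47e8-81c4-f0747a3e1915",
          "external_ids:iface-status=active",
          "external_ids:attached-mac=fa:16:3e:5e:e6:af",
          "external_ids:vm-uuid=4809ca0d-4075-4d68-8ee7-5275c4253891")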
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.276 189273 DEBUG nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.277 189273 DEBUG nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.278 189273 DEBUG nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] No VIF found with MAC fa:16:3e:5e:e6:af, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.279 189273 INFO nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Using config drive
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.584 189273 INFO nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Creating config drive at /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk.config
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.591 189273 DEBUG oslo_concurrency.processutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2h4dbyhu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.723 189273 DEBUG oslo_concurrency.processutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2h4dbyhu" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
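The config drive is an ISO 9660 image labelled config-2, built by the mkisofs invocation above from a temporary staging tree (conventionally openstack/latest/meta_data.json, user_data, and so on). A stripped-down sketch of the same call; the staging layout and metadata content here are illustrative, while the mkisofs flags mirror the logged command:

    # Sketch: build a config-2 ISO the way the logged mkisofs call does.
    import json
    import pathlib
    import subprocess
    import tempfile

    with tempfile.TemporaryDirectory() as staging:
        latest = pathlib.Path(staging, "openstack", "latest")
        latest.mkdir(parents=True)
        (latest / "meta_data.json").write_text(
            json.dumps({"uuid": "4809ca0d-4075-4d68-8ee7-5275c4253891"}))
        subprocess.run(
            ["mkisofs", "-o", "disk.config",
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", "OpenStack Compute", "-quiet",
             "-J", "-r", "-V", "config-2", staging],
            check=True)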
Nov 22 08:58:04 compute-0 kernel: tap9ec3e8b1-78: entered promiscuous mode
Nov 22 08:58:04 compute-0 NetworkManager[56326]: <info>  [1763801884.7841] manager: (tap9ec3e8b1-78): new Tun device (/org/freedesktop/NetworkManager/Devices/78)
Nov 22 08:58:04 compute-0 ovn_controller[97783]: 2025-11-22T08:58:04Z|00171|binding|INFO|Claiming lport 9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 for this chassis.
Nov 22 08:58:04 compute-0 ovn_controller[97783]: 2025-11-22T08:58:04Z|00172|binding|INFO|9ec3e8b1-78a3-47e8-81c4-f0747a3e1915: Claiming fa:16:3e:5e:e6:af 10.100.3.103
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.786 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:04 compute-0 ovn_controller[97783]: 2025-11-22T08:58:04Z|00173|binding|INFO|Setting lport 9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 ovn-installed in OVS
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.801 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.805 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:04 compute-0 ovn_controller[97783]: 2025-11-22T08:58:04Z|00174|binding|INFO|Setting lport 9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 up in Southbound
Nov 22 08:58:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:58:04.806 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:e6:af 10.100.3.103'], port_security=['fa:16:3e:5e:e6:af 10.100.3.103'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.103/16', 'neutron:device_id': '4809ca0d-4075-4d68-8ee7-5275c4253891', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8ee541ea-f059-4138-b6cf-87ec84c3e9f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6872b219a7f441adb7db6dc2b4e66fd7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c782ed20-231b-4e59-ad25-952e10372407', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5efbe77c-7f0b-4c5a-a729-30b470e68fec, chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=9ec3e8b1-78a3-47e8-81c4-f0747a3e1915) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 08:58:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:58:04.808 106642 INFO neutron.agent.ovn.metadata.agent [-] Port 9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 in datapath 8ee541ea-f059-4138-b6cf-87ec84c3e9f8 bound to our chassis
Nov 22 08:58:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:58:04.809 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8ee541ea-f059-4138-b6cf-87ec84c3e9f8
Nov 22 08:58:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:58:04.829 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[92689630-953b-4ac1-8776-cc446b50ccd9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:58:04 compute-0 systemd-machined[155703]: New machine qemu-17-instance-00000010.
Nov 22 08:58:04 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000010.
Nov 22 08:58:04 compute-0 systemd-udevd[256428]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 08:58:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:58:04.866 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[8a51bbff-640e-4ac6-979e-1f0bce05e6c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:58:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:58:04.870 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[d0daf33b-d74c-4b32-91e4-489bd686620a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:58:04 compute-0 NetworkManager[56326]: <info>  [1763801884.8786] device (tap9ec3e8b1-78): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 08:58:04 compute-0 NetworkManager[56326]: <info>  [1763801884.8868] device (tap9ec3e8b1-78): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 08:58:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:58:04.909 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd49ef5-6199-4a1a-a384-95f8954928c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:58:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:58:04.931 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[03c52d6f-b171-42fb-b46b-d858da6ee84a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8ee541ea-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:36:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672040, 'reachable_time': 29352, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256438, 'error': None, 'target': 'ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 08:58:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:58:04.949 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[66bdd2fe-6dfc-4cdc-934f-d27999022031]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap8ee541ea-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672051, 'tstamp': 672051}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256439, 'error': None, 'target': 'ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8ee541ea-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672053, 'tstamp': 672053}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256439, 'error': None, 'target': 'ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
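The two RTM_NEWADDR replies above confirm that the metadata interface tap8ee541ea-f1 inside the ovnmeta-8ee541ea-... namespace holds both an address on the tenant subnet (10.100.0.2/16) and the well-known metadata address 169.254.169.254/32. One way to verify this by hand from the compute host; a sketch that needs root and the iproute2 "ip" binary:

    # Sketch: list addresses inside the OVN metadata namespace seen above.
    import subprocess

    ns = "ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8"
    subprocess.run(["ip", "-n", ns, "addr", "show"], check=True)
    # Expect tap8ee541ea-f1 carrying 10.100.0.2/16 and 169.254.169.254/32.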
Nov 22 08:58:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:58:04.952 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ee541ea-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.954 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:04 compute-0 nova_compute[189268]: 2025-11-22 08:58:04.955 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:58:04.956 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ee541ea-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:58:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:58:04.956 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:58:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:58:04.957 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8ee541ea-f0, col_values=(('external_ids', {'iface-id': 'cddd47d2-111c-4ed1-83df-9f3b0e628d26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 08:58:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:58:04.958 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.115 189273 DEBUG nova.compute.manager [req-2ac414f5-4e33-46ce-8989-82ae18b5da92 req-ab1bd89a-f8b5-4210-8693-9fae868e6513 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Received event network-vif-plugged-9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.116 189273 DEBUG oslo_concurrency.lockutils [req-2ac414f5-4e33-46ce-8989-82ae18b5da92 req-ab1bd89a-f8b5-4210-8693-9fae868e6513 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "4809ca0d-4075-4d68-8ee7-5275c4253891-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.116 189273 DEBUG oslo_concurrency.lockutils [req-2ac414f5-4e33-46ce-8989-82ae18b5da92 req-ab1bd89a-f8b5-4210-8693-9fae868e6513 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4809ca0d-4075-4d68-8ee7-5275c4253891-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.117 189273 DEBUG oslo_concurrency.lockutils [req-2ac414f5-4e33-46ce-8989-82ae18b5da92 req-ab1bd89a-f8b5-4210-8693-9fae868e6513 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4809ca0d-4075-4d68-8ee7-5275c4253891-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.117 189273 DEBUG nova.compute.manager [req-2ac414f5-4e33-46ce-8989-82ae18b5da92 req-ab1bd89a-f8b5-4210-8693-9fae868e6513 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Processing event network-vif-plugged-9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 08:58:05 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 22 08:58:05 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.434 189273 DEBUG nova.compute.manager [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.436 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801885.4342933, 4809ca0d-4075-4d68-8ee7-5275c4253891 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.437 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] VM Started (Lifecycle Event)
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.447 189273 DEBUG nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.452 189273 INFO nova.virt.libvirt.driver [-] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Instance spawned successfully.
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.453 189273 DEBUG nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.455 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.460 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.473 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] During sync_power_state the instance has a pending task (spawning). Skip.
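The power-state sync above compares the value recorded in the database (0) with what libvirt now reports (1); both are Nova's small-integer power-state constants. A lookup table for reading these lines, with values as defined in nova.compute.power_state:

    # Nova power-state integers behind "current DB power_state: 0,
    # VM power_state: 1" (values per nova.compute.power_state).
    POWER_STATE = {
        0: "NOSTATE",    # DB value before the first successful sync
        1: "RUNNING",    # what libvirt reports once the guest starts
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
    print(POWER_STATE[0], "->", POWER_STATE[1])  # NOSTATE -> RUNNING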
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.474 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801885.4344082, 4809ca0d-4075-4d68-8ee7-5275c4253891 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.475 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] VM Paused (Lifecycle Event)
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.479 189273 DEBUG nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.479 189273 DEBUG nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.480 189273 DEBUG nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.481 189273 DEBUG nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.481 189273 DEBUG nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.482 189273 DEBUG nova.virt.libvirt.driver [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.487 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.491 189273 DEBUG nova.virt.driver [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] Emitting event <LifecycleEvent: 1763801885.4418726, 4809ca0d-4075-4d68-8ee7-5275c4253891 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.492 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] VM Resumed (Lifecycle Event)
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.509 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.514 189273 DEBUG nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.534 189273 INFO nova.compute.manager [None req-3421b583-6916-418e-a4a4-925c3652cc81 - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.544 189273 INFO nova.compute.manager [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Took 8.26 seconds to spawn the instance on the hypervisor.
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.544 189273 DEBUG nova.compute.manager [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.605 189273 INFO nova.compute.manager [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Took 8.75 seconds to build instance.
Nov 22 08:58:05 compute-0 nova_compute[189268]: 2025-11-22 08:58:05.619 189273 DEBUG oslo_concurrency.lockutils [None req-86c12b78-8333-44a4-b773-3feec755a802 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "4809ca0d-4075-4d68-8ee7-5275c4253891" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:58:07 compute-0 nova_compute[189268]: 2025-11-22 08:58:07.095 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:07 compute-0 nova_compute[189268]: 2025-11-22 08:58:07.208 189273 DEBUG nova.compute.manager [req-15065891-f664-4b0e-b619-2eda6b843c71 req-696da0f3-9c11-4564-ba01-6d70ca7a031c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Received event network-vif-plugged-9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 08:58:07 compute-0 nova_compute[189268]: 2025-11-22 08:58:07.209 189273 DEBUG oslo_concurrency.lockutils [req-15065891-f664-4b0e-b619-2eda6b843c71 req-696da0f3-9c11-4564-ba01-6d70ca7a031c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "4809ca0d-4075-4d68-8ee7-5275c4253891-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:58:07 compute-0 nova_compute[189268]: 2025-11-22 08:58:07.210 189273 DEBUG oslo_concurrency.lockutils [req-15065891-f664-4b0e-b619-2eda6b843c71 req-696da0f3-9c11-4564-ba01-6d70ca7a031c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4809ca0d-4075-4d68-8ee7-5275c4253891-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:58:07 compute-0 nova_compute[189268]: 2025-11-22 08:58:07.211 189273 DEBUG oslo_concurrency.lockutils [req-15065891-f664-4b0e-b619-2eda6b843c71 req-696da0f3-9c11-4564-ba01-6d70ca7a031c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4809ca0d-4075-4d68-8ee7-5275c4253891-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:58:07 compute-0 nova_compute[189268]: 2025-11-22 08:58:07.212 189273 DEBUG nova.compute.manager [req-15065891-f664-4b0e-b619-2eda6b843c71 req-696da0f3-9c11-4564-ba01-6d70ca7a031c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] No waiting events found dispatching network-vif-plugged-9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 08:58:07 compute-0 nova_compute[189268]: 2025-11-22 08:58:07.213 189273 WARNING nova.compute.manager [req-15065891-f664-4b0e-b619-2eda6b843c71 req-696da0f3-9c11-4564-ba01-6d70ca7a031c 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Received unexpected event network-vif-plugged-9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 for instance with vm_state active and task_state None.
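This warning is benign: Neutron re-sent network-vif-plugged after the build had already finished, so no waiter was registered and the event is simply dropped. Such notifications arrive through Nova's os-server-external-events API; a hedged sketch of the kind of request Neutron issues, where the endpoint path and body shape follow the public API but the hostname and token are placeholders:

    # Hedged sketch of the external-event POST Neutron sends to Nova
    # (os-server-external-events API; auth and service discovery elided).
    import json
    import urllib.request

    payload = {"events": [{
        "name": "network-vif-plugged",
        "server_uuid": "4809ca0d-4075-4d68-8ee7-5275c4253891",
        "tag": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915",
    }]}
    req = urllib.request.Request(
        "http://nova-api.example.com:8774/v2.1/os-server-external-events",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "X-Auth-Token": "ADMIN_TOKEN"},  # placeholder token
        method="POST")
    # urllib.request.urlopen(req)  # not executed in this sketch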
Nov 22 08:58:07 compute-0 nova_compute[189268]: 2025-11-22 08:58:07.635 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.095 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.119 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.120 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.120 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.121 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.212 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.274 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.275 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.353 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.368 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.428 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.429 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.490 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
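The four subprocess runs above are how nova's periodic resource audit sizes each instance's disk: qemu-img info is executed under oslo.concurrency's prlimit wrapper, which caps the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30). A minimal sketch of the equivalent call using oslo.concurrency's public processutils API; the instance path is a placeholder:

    from oslo_concurrency import processutils

    # Mirror the --as/--cpu limits seen in the log lines above.
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)

    stdout, stderr = processutils.execute(
        'qemu-img', 'info', '/var/lib/nova/instances/<uuid>/disk',  # placeholder path
        '--force-share', '--output=json',
        prlimit=limits,
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},
    )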
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.847 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.849 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5158MB free_disk=72.39687728881836GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
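The pci_devices payload in the resource view above is plain JSON. A short illustrative snippet (hypothetical helper, not nova code) that groups the devices by vendor, using two entries copied from the list:

    import json
    from collections import defaultdict

    pci_json = '''[
      {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005",
       "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"},
      {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000",
       "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}
    ]'''

    by_vendor = defaultdict(list)
    for dev in json.loads(pci_json):
        by_vendor[dev["vendor_id"]].append(dev["address"])
    print(dict(by_vendor))  # {'1af4': ['0000:00:06.0'], '8086': ['0000:00:01.0']}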
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.849 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.850 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.917 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 is actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.918 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4809ca0d-4075-4d68-8ee7-5275c4253891 is actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.918 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.919 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.969 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.981 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
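Placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio, so the inventory above works out as follows (simple arithmetic, shown as Python):

    vcpu_cap = (8 - 0) * 4.0        # 32.0 schedulable VCPUs
    ram_cap  = (7679 - 512) * 1.0   # 7167.0 MB schedulable memory
    disk_cap = (79 - 1) * 0.9       # 70.2 GB schedulable disk
    print(vcpu_cap, ram_cap, disk_cap)

With only two 1-VCPU, 128 MB, 1 GB instances allocated, the node is far from any of those ceilings.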
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.998 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:58:09 compute-0 nova_compute[189268]: 2025-11-22 08:58:09.998 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
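The acquire/release pair bracketing the update (held 0.149 s here) comes from oslo.concurrency's lockutils. A minimal sketch of the same pattern using the public synchronized decorator; the function body is illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # Critical section: serializes access to the resource
        # tracker's shared state, as in the log lines above.
        pass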
Nov 22 08:58:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:58:10.002 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:58:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:58:10.003 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:58:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:58:10.003 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:58:11 compute-0 podman[256481]: 2025-11-22 08:58:11.151588849 +0000 UTC m=+0.101267739 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=)
Nov 22 08:58:12 compute-0 nova_compute[189268]: 2025-11-22 08:58:12.638 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:13 compute-0 podman[256499]: 2025-11-22 08:58:13.155766237 +0000 UTC m=+0.110033034 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 22 08:58:14 compute-0 nova_compute[189268]: 2025-11-22 08:58:14.104 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:17 compute-0 nova_compute[189268]: 2025-11-22 08:58:17.641 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:19 compute-0 nova_compute[189268]: 2025-11-22 08:58:19.110 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.098 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them, so the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.099 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
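The two debug lines above describe a queueing effect: with one worker thread and many pollsters, submissions back up and the cycle stretches. A self-contained sketch of that behaviour with concurrent.futures (hypothetical task function, not ceilometer code):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)   # stand-in for one pollster's work
        return name

    with ThreadPoolExecutor(max_workers=1) as pool:          # 1 thread, 5 tasks
        futures = [pool.submit(poll, f'pollster-{i}') for i in range(5)]
        results = [f.result() for f in futures]              # runs serially, ~0.5 s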
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.099 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
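Each "Registering pollster" line corresponds to a stevedore extension loaded from an entry-point namespace. A minimal sketch of how such extensions are enumerated; the namespace string is an assumption about ceilometer's compute agent, not taken from this log:

    from stevedore import extension

    mgr = extension.ExtensionManager(
        namespace='ceilometer.poll.compute',  # assumed namespace
        invoke_on_load=True,
    )
    for ext in mgr:
        print(ext.name)   # e.g. cpu, network.incoming.bytes, ...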
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.108 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5', 'name': 'te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn', 'flavor': {'id': '60cc47c3-347f-4964-bb52-9bef8d0548a9', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '0f738201-0a54-4f17-a455-df9aa7963f79'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6872b219a7f441adb7db6dc2b4e66fd7', 'user_id': '37215e9bc58040aeb55ccd7e534b2a8c', 'hostId': '44bfd8cb608e8e36740e229fabc76c7785419d24d05fef040bbf4521', 'status': 'active', 'metadata': {'metering.server_group': 'e65dbf71-31dd-495a-8544-26d84c5284b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.111 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 4809ca0d-4075-4d68-8ee7-5275c4253891 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 22 08:58:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:22.112 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/4809ca0d-4075-4d68-8ee7-5275c4253891 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}41de7311aa3eb0f3adb679afd5ea377bdc27c99a5c84bf2ba532fbbe80a7016c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 22 08:58:22 compute-0 podman[256526]: 2025-11-22 08:58:22.132925252 +0000 UTC m=+0.076511442 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 08:58:22 compute-0 podman[256524]: 2025-11-22 08:58:22.13842191 +0000 UTC m=+0.092817330 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 08:58:22 compute-0 podman[256525]: 2025-11-22 08:58:22.14618824 +0000 UTC m=+0.096560182 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 08:58:22 compute-0 nova_compute[189268]: 2025-11-22 08:58:22.645 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:24 compute-0 nova_compute[189268]: 2025-11-22 08:58:24.115 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.301 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Sat, 22 Nov 2025 08:58:22 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-d96e7a12-879f-42b5-bcab-56a1619415df x-openstack-request-id: req-d96e7a12-879f-42b5-bcab-56a1619415df _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.301 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "4809ca0d-4075-4d68-8ee7-5275c4253891", "name": "te-1646439-asg-gba3vv6vgk7b-tmn4otq576rq-xk2uuzpcqq5p", "status": "ACTIVE", "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "user_id": "37215e9bc58040aeb55ccd7e534b2a8c", "metadata": {"metering.server_group": "e65dbf71-31dd-495a-8544-26d84c5284b3"}, "hostId": "44bfd8cb608e8e36740e229fabc76c7785419d24d05fef040bbf4521", "image": {"id": "0f738201-0a54-4f17-a455-df9aa7963f79", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/0f738201-0a54-4f17-a455-df9aa7963f79"}]}, "flavor": {"id": "60cc47c3-347f-4964-bb52-9bef8d0548a9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/60cc47c3-347f-4964-bb52-9bef8d0548a9"}]}, "created": "2025-11-22T08:57:55Z", "updated": "2025-11-22T08:58:05Z", "addresses": {"": [{"version": 4, "addr": "10.100.3.103", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:5e:e6:af"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/4809ca0d-4075-4d68-8ee7-5275c4253891"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/4809ca0d-4075-4d68-8ee7-5275c4253891"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-22T08:58:05.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000010", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.301 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/4809ca0d-4075-4d68-8ee7-5275c4253891 used request id req-d96e7a12-879f-42b5-bcab-56a1619415df request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
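The REQ/RESP pair above is a single microversion-2.1 GET against the compute API, authenticated with a Keystone token. The same lookup through python-novaclient would look roughly like this; the auth URL and credentials are placeholders:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    auth = v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000/v3',  # placeholder
        username='ceilometer', password='...',                       # placeholders
        project_name='service',
        user_domain_name='Default', project_domain_name='Default',
    )
    nova = client.Client('2.1', session=session.Session(auth=auth))
    server = nova.servers.get('4809ca0d-4075-4d68-8ee7-5275c4253891')
    print(server.name, server.status)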
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.303 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4809ca0d-4075-4d68-8ee7-5275c4253891', 'name': 'te-1646439-asg-gba3vv6vgk7b-tmn4otq576rq-xk2uuzpcqq5p', 'flavor': {'id': '60cc47c3-347f-4964-bb52-9bef8d0548a9', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '0f738201-0a54-4f17-a455-df9aa7963f79'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6872b219a7f441adb7db6dc2b4e66fd7', 'user_id': '37215e9bc58040aeb55ccd7e534b2a8c', 'hostId': '44bfd8cb608e8e36740e229fabc76c7785419d24d05fef040bbf4521', 'status': 'active', 'metadata': {'metering.server_group': 'e65dbf71-31dd-495a-8544-26d84c5284b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.303 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.303 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.303 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.303 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.304 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-22T08:58:24.303873) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.310 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.bytes volume: 1436 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.313 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 4809ca0d-4075-4d68-8ee7-5275c4253891 / tap9ec3e8b1-78 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.314 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.314 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
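The "No delta meter predecessor" line explains the 4809ca0d sample: network.incoming.bytes is read from a cumulative interface counter, and a freshly discovered vNIC has no previous reading, so the inspector cannot form a delta for it yet; the cumulative sample (volume: 90) is still emitted. A sketch of that bookkeeping, under that reading (not ceilometer's actual implementation):

    _prev = {}  # (instance_id, device) -> last cumulative reading

    def delta(instance_id, device, cumulative):
        key = (instance_id, device)
        if key not in _prev:        # no predecessor: seed and skip
            _prev[key] = cumulative
            return None
        d = cumulative - _prev[key]
        _prev[key] = cumulative
        return d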
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.314 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.314 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.314 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.314 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.315 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.315 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-22T08:58:24.315069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.315 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.315 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.316 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.316 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.316 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.316 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.316 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.316 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-22T08:58:24.316778) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.316 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.317 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.317 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.318 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.318 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.318 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.318 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.318 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.318 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.318 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.319 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-22T08:58:24.318390) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.319 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.319 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.319 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.319 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.320 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.320 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.320 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-22T08:58:24.320082) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.352 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/cpu volume: 292870000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.379 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/cpu volume: 18550000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.394 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
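The cpu meter is cumulative guest CPU time in nanoseconds, so the two samples above decode to total CPU consumed since each instance started:

    print(292_870_000_000 / 1e9)  # ~292.87 s for instance 4abcb9e5-...
    print(18_550_000_000 / 1e9)   # ~18.55 s for instance 4809ca0d-...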
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.394 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.395 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.395 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.395 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.395 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.400 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-22T08:58:24.395288) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.400 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.401 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.401 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.401 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.401 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.402 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.402 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.402 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-22T08:58:24.402197) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.420 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.421 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.433 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.434 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.434 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
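Each meter above follows the same five-step pattern from ceilometer.polling.manager: resource discovery (manager.py:294), a coordination check (manager.py:333 and :355), a heartbeat update (manager.py:636 and :502), one _stats_to_sample DEBUG line per instance and device (compute/pollsters/__init__.py:108), and a closing "Finished polling" INFO line. A minimal, stdlib-only sketch for pulling the (instance, meter, volume) triples out of a saved journal capture follows; the regex and the "compute-0.log" path are illustrative assumptions, not part of ceilometer.

    import re
    from collections import defaultdict

    # Matches the per-sample DEBUG lines from ceilometer.compute.pollsters, e.g.
    # "... DEBUG ceilometer.compute.pollsters [-] <uuid>/<meter> volume: <n> _stats_to_sample ..."
    SAMPLE_RE = re.compile(
        r"DEBUG ceilometer\.compute\.pollsters \[-\] "
        r"(?P<instance>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<volume>\S+)"
    )

    def extract_samples(lines):
        """Yield (instance_uuid, meter_name, volume_text) from journal lines."""
        for line in lines:
            m = SAMPLE_RE.search(line)
            if m:
                # Volume stays text: it can be non-numeric, e.g. "Unavailable".
                yield m.group("instance"), m.group("meter"), m.group("volume")

    if __name__ == "__main__":
        by_meter = defaultdict(list)
        with open("compute-0.log", encoding="utf-8") as fh:  # placeholder path
            for instance, meter, volume in extract_samples(fh):
                by_meter[meter].append((instance, volume))
        for meter, rows in sorted(by_meter.items()):
            print(f"{meter}: {len(rows)} sample(s)")

Note that per-device meters such as disk.device.capacity legitimately emit several samples per instance, one per attached disk, which is why each UUID appears twice in the block above.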
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.434 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.434 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.435 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.435 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.435 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-22T08:58:24.435234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.471 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.bytes volume: 30149632 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.471 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.522 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.522 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.523 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
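Every cycle also logs "Checking if we need coordination ... with coordination group name [None]": none of these pollsters belongs to a polling source with a partitioning group, so no hash ring is consulted and this agent polls all of its local instances itself. When a group is configured, ceilometer splits the resources across agents via a tooz-backed hash ring; the dependency-free sketch below only illustrates the partitioning idea (plain modulo bucketing rather than a true consistent-hash ring, and the agent names are made up).

    import hashlib

    def owner(resource_id: str, agents: list[str]) -> str:
        """Toy resource-to-agent assignment by hashing the resource id.

        Real ceilometer coordination uses a tooz hash ring shared through a
        coordination backend; this only shows how equal hashing lets several
        agents poll disjoint subsets without talking to each other.
        """
        digest = hashlib.md5(resource_id.encode()).hexdigest()
        return agents[int(digest, 16) % len(agents)]

    agents = ["compute-0", "compute-1"]  # hypothetical agent group
    for res in ("4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5",
                "4809ca0d-4075-4d68-8ee7-5275c4253891"):
        print(res, "->", owner(res, agents))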
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.523 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.523 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.523 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.523 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.524 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.524 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-22T08:58:24.523983) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.524 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.524 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.525 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.525 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.525 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.525 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.525 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.525 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.526 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.526 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.526 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-22T08:58:24.526038) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.526 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.526 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.527 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.527 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.527 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.527 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.527 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.527 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.528 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-22T08:58:24.527717) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.528 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-1646439-asg-gba3vv6vgk7b-tmn4otq576rq-xk2uuzpcqq5p>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-1646439-asg-gba3vv6vgk7b-tmn4otq576rq-xk2uuzpcqq5p>]
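The ERROR at 08:58:24.528 is expected behavior rather than a crash: a few lines earlier the agent notes that LibvirtInspector does not provide data for IncomingBytesRatePollster (rates are generally derived downstream from the cumulative byte counters), so the pollster raises ceilometer.polling.plugin_base.PollsterPermanentError listing the affected resources, and the manager stops polling them for this source, hence the "anymore!". A self-contained sketch of that blacklist pattern, with hypothetical names standing in for ceilometer's real classes:

    class PollsterPermanentError(Exception):
        """Stand-in for ceilometer.polling.plugin_base.PollsterPermanentError."""
        def __init__(self, resources):
            super().__init__(resources)
            self.fail_res_list = resources

    def poll(name, resources, blacklist, get_samples):
        """Skip blacklisted resources; grow the blacklist on permanent errors."""
        todo = [r for r in resources if r not in blacklist]
        try:
            return list(get_samples(todo))
        except PollsterPermanentError as exc:
            print(f"Prevent pollster {name} from polling {exc.fail_res_list} anymore!")
            blacklist.update(exc.fail_res_list)
            return []

    def rate_samples(resources):
        # The hypothetical inspector has no rate data: fail permanently.
        raise PollsterPermanentError(resources)

    blacklist: set[str] = set()
    poll("network.incoming.bytes.rate", ["server-a"], blacklist, rate_samples)
    print("blacklisted:", blacklist)  # -> {'server-a'}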
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.528 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.528 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.528 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.529 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.529 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.529 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.latency volume: 1495963975 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.529 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-22T08:58:24.529148) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.529 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.latency volume: 112899247 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.529 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.latency volume: 2190398842 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.530 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.latency volume: 3452552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.530 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.530 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.530 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.531 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.531 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.531 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.531 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.531 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.532 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.532 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.532 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.532 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-22T08:58:24.531180) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.532 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.532 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.532 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.533 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.requests volume: 330 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.533 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-22T08:58:24.532795) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.533 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.533 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.534 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.534 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.534 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.534 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.534 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.535 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.535 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.535 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.requests volume: 1093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.535 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-22T08:58:24.535078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.535 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.535 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.536 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.536 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.536 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.537 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.537 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.537 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.537 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.537 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.537 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.538 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-22T08:58:24.537290) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.538 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.538 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.539 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.539 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.539 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.539 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.539 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.539 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.539 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.540 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.540 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.540 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.541 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.541 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.541 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.541 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-22T08:58:24.539854) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.541 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.542 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.542 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.542 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.542 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.543 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-22T08:58:24.542162) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
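Both instances report power.state volume 1. Assuming the value follows libvirt's virDomainState numbering (the inspector here is libvirt-based), 1 means the domain is running; Nova's power-state constants use the same value for RUNNING.

    # libvirt virDomainState codes; 1 == running under both the libvirt
    # and the Nova power-state numbering.
    VIR_DOMAIN_STATE = {
        0: "nostate",
        1: "running",
        2: "blocked",
        3: "paused",
        4: "shutdown",    # in the process of shutting down
        5: "shutoff",
        6: "crashed",
        7: "pmsuspended",
    }

    print(VIR_DOMAIN_STATE[1])  # -> running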
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.543 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.543 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.543 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.543 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.543 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.544 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.latency volume: 64886120960 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.544 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-22T08:58:24.543883) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.544 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.544 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.545 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.545 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.545 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.545 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.545 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.546 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.546 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.546 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-22T08:58:24.546126) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.546 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.547 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.547 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.547 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.547 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.547 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.547 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets volume: 11 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.547 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-22T08:58:24.547674) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.548 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.548 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.548 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.548 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.549 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.549 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.549 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.549 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-22T08:58:24.549182) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.549 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.550 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.550 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.550 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.550 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.550 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.550 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.551 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.551 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.551 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.551 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.552 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.552 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.552 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-22T08:58:24.550421) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.552 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.552 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-22T08:58:24.552402) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.552 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.553 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.553 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.553 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.553 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.553 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.553 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.553 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.554 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-22T08:58:24.553790) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.554 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-1646439-asg-gba3vv6vgk7b-tmn4otq576rq-xk2uuzpcqq5p>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-1646439-asg-gba3vv6vgk7b-tmn4otq576rq-xk2uuzpcqq5p>]
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.554 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.554 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.554 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.554 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.554 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.555 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/memory.usage volume: 43.078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.555 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.555 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 4809ca0d-4075-4d68-8ee7-5275c4253891: ceilometer.compute.pollsters.NoVolumeException
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.555 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-22T08:58:24.554845) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.555 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.556 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.556 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.556 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.556 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.556 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.556 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.556 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.556 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.556 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.556 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.557 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.557 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.557 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.557 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.557 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.557 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.557 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.557 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.557 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.557 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.557 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.557 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.558 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.558 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.558 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 08:58:24 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 08:58:24.558 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
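[annotation] The polling cycle above closes with one "Finished processing pollster [...]" line per meter, while network.outgoing.bytes.rate was blacklisted earlier with a PollsterPermanentError (ceilometer stops polling that source for the meter rather than retrying). When triaging a long journal export, a small script can summarize which pollsters completed and which were permanently disabled. A minimal sketch; the filename is hypothetical and the regexes simply match the manager.py messages shown above:

```python
import re
from collections import Counter

# Illustrative patterns for the ceilometer.polling.manager lines above.
FINISHED = re.compile(r"Finished processing pollster \[([^\]]+)\]")
BLACKLISTED = re.compile(r"Prevent pollster (\S+) from polling")

def summarize(path):
    """Count finished vs. permanently-failed pollsters in an exported journal."""
    finished, blacklisted = Counter(), Counter()
    with open(path) as fh:
        for line in fh:
            if m := FINISHED.search(line):
                finished[m.group(1)] += 1
            elif m := BLACKLISTED.search(line):
                blacklisted[m.group(1)] += 1
    return finished, blacklisted

if __name__ == "__main__":
    done, failed = summarize("compute-0-messages.log")  # hypothetical export
    print(f"{len(done)} pollsters finished; blacklisted: {sorted(failed)}")
```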
Nov 22 08:58:27 compute-0 nova_compute[189268]: 2025-11-22 08:58:27.648 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:29 compute-0 nova_compute[189268]: 2025-11-22 08:58:29.118 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:29 compute-0 podman[203476]: time="2025-11-22T08:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:58:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:58:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
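[annotation] The two GET requests above are a scraper (the podman_exporter, per its CONTAINER_HOST=unix:///run/podman/podman.sock setting later in this log) hitting podman's libpod REST API over its unix socket. The same endpoint can be queried by hand with only the standard library; a minimal sketch, with the socket path and API version string copied from this log, and the "Names"/"State" fields assumed from libpod's list-containers response shape:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a unix socket instead of a TCP host."""
    def __init__(self, socket_path):
        super().__init__("localhost")  # Host header value; podman ignores it
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
body = conn.getresponse().read()
for ctr in json.loads(body):
    print(ctr["Names"], ctr["State"])
```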
Nov 22 08:58:31 compute-0 openstack_network_exporter[205661]: ERROR   08:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:58:31 compute-0 openstack_network_exporter[205661]: ERROR   08:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:58:31 compute-0 openstack_network_exporter[205661]: ERROR   08:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:58:31 compute-0 openstack_network_exporter[205661]: ERROR   08:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:58:31 compute-0 openstack_network_exporter[205661]: ERROR   08:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
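[annotation] These exporter errors recur every 30 seconds: this node runs ovn-controller but no ovn-northd and no locally reachable ovsdb-server control socket, so the appctl calls have nothing to dial. Daemon control sockets are conventionally named <daemon>.<pid>.ctl under the runtime directories that the exporter container mounts (/run/openvswitch and /run/ovn, per its volume list in the health_status entry at 08:58:42 below). A minimal sketch to check what actually exists, assuming those host paths:

```python
import glob

# Runtime dirs mounted into the openstack_network_exporter container.
for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
    hits = glob.glob(pattern)
    print(pattern, "->", hits or "no control sockets found")
```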
Nov 22 08:58:32 compute-0 nova_compute[189268]: 2025-11-22 08:58:32.650 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:34 compute-0 nova_compute[189268]: 2025-11-22 08:58:34.122 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:34 compute-0 podman[256592]: 2025-11-22 08:58:34.154139914 +0000 UTC m=+0.079407970 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 08:58:34 compute-0 podman[256584]: 2025-11-22 08:58:34.168530011 +0000 UTC m=+0.118280746 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release=1214.1726694543, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, release-0.7.12=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 22 08:58:34 compute-0 podman[256586]: 2025-11-22 08:58:34.19711168 +0000 UTC m=+0.125327186 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 22 08:58:34 compute-0 podman[256585]: 2025-11-22 08:58:34.25239141 +0000 UTC m=+0.184731477 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 08:58:34 compute-0 ovn_controller[97783]: 2025-11-22T08:58:34Z|00175|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 22 08:58:37 compute-0 nova_compute[189268]: 2025-11-22 08:58:37.652 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:39 compute-0 nova_compute[189268]: 2025-11-22 08:58:39.125 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:42 compute-0 podman[256662]: 2025-11-22 08:58:42.110910786 +0000 UTC m=+0.065271939 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, distribution-scope=public, io.openshift.tags=minimal rhel9, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 22 08:58:42 compute-0 nova_compute[189268]: 2025-11-22 08:58:42.658 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:44 compute-0 podman[256681]: 2025-11-22 08:58:44.117255214 +0000 UTC m=+0.073419419 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:58:44 compute-0 nova_compute[189268]: 2025-11-22 08:58:44.129 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:46 compute-0 nova_compute[189268]: 2025-11-22 08:58:45.999 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:47 compute-0 nova_compute[189268]: 2025-11-22 08:58:47.660 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:48 compute-0 nova_compute[189268]: 2025-11-22 08:58:48.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:49 compute-0 nova_compute[189268]: 2025-11-22 08:58:49.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:49 compute-0 nova_compute[189268]: 2025-11-22 08:58:49.136 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:50 compute-0 nova_compute[189268]: 2025-11-22 08:58:50.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:50 compute-0 nova_compute[189268]: 2025-11-22 08:58:50.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:58:50 compute-0 nova_compute[189268]: 2025-11-22 08:58:50.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 08:58:52 compute-0 nova_compute[189268]: 2025-11-22 08:58:52.304 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:58:52 compute-0 nova_compute[189268]: 2025-11-22 08:58:52.305 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:58:52 compute-0 nova_compute[189268]: 2025-11-22 08:58:52.305 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:58:52 compute-0 nova_compute[189268]: 2025-11-22 08:58:52.306 189273 DEBUG nova.objects.instance [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 08:58:52 compute-0 ovn_controller[97783]: 2025-11-22T08:58:52Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:e6:af 10.100.3.103
Nov 22 08:58:52 compute-0 ovn_controller[97783]: 2025-11-22T08:58:52Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:e6:af 10.100.3.103
Nov 22 08:58:52 compute-0 nova_compute[189268]: 2025-11-22 08:58:52.661 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:53 compute-0 podman[256720]: 2025-11-22 08:58:53.114037196 +0000 UTC m=+0.058966449 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 22 08:58:53 compute-0 podman[256719]: 2025-11-22 08:58:53.114642463 +0000 UTC m=+0.072644508 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd)
Nov 22 08:58:53 compute-0 podman[256721]: 2025-11-22 08:58:53.143225422 +0000 UTC m=+0.091269019 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 22 08:58:54 compute-0 nova_compute[189268]: 2025-11-22 08:58:54.139 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:55 compute-0 nova_compute[189268]: 2025-11-22 08:58:55.483 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updating instance_info_cache with network_info: [{"id": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "address": "fa:16:3e:84:a4:4f", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped7b62da-e4", "ovs_interfaceid": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:58:55 compute-0 nova_compute[189268]: 2025-11-22 08:58:55.497 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:58:55 compute-0 nova_compute[189268]: 2025-11-22 08:58:55.498 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 08:58:55 compute-0 nova_compute[189268]: 2025-11-22 08:58:55.499 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:55 compute-0 nova_compute[189268]: 2025-11-22 08:58:55.499 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:55 compute-0 nova_compute[189268]: 2025-11-22 08:58:55.500 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
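[annotation] Each "Running periodic task ComputeManager._..." line is oslo.service iterating the decorated methods on nova's compute manager; _reclaim_queued_deletes short-circuits here because reclaim_instance_interval is at its default of 0. A minimal sketch of the mechanism under the same oslo.service API; the manager class, task body, and 60-second spacing are illustrative, not nova's:

```python
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class Manager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)  # registers periodic-task options on CONF

    @periodic_task.periodic_task(spacing=60, run_immediately=True)
    def _poll_something(self, context):
        # Emitted via "Running periodic task ..." DEBUG lines like the ones above.
        print("periodic task ran")

mgr = Manager()
mgr.run_periodic_tasks(context=None)  # one manual tick of the service loop
```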
Nov 22 08:58:57 compute-0 nova_compute[189268]: 2025-11-22 08:58:57.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:57 compute-0 nova_compute[189268]: 2025-11-22 08:58:57.664 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:59 compute-0 nova_compute[189268]: 2025-11-22 08:58:59.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:58:59 compute-0 nova_compute[189268]: 2025-11-22 08:58:59.143 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:58:59 compute-0 podman[203476]: time="2025-11-22T08:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:58:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:58:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4819 "" "Go-http-client/1.1"
Nov 22 08:59:01 compute-0 openstack_network_exporter[205661]: ERROR   08:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:59:01 compute-0 openstack_network_exporter[205661]: ERROR   08:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:59:01 compute-0 openstack_network_exporter[205661]: ERROR   08:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:59:01 compute-0 openstack_network_exporter[205661]: ERROR   08:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:59:01 compute-0 openstack_network_exporter[205661]: ERROR   08:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:59:02 compute-0 nova_compute[189268]: 2025-11-22 08:59:02.666 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:04 compute-0 nova_compute[189268]: 2025-11-22 08:59:04.147 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:05 compute-0 podman[256780]: 2025-11-22 08:59:05.125763562 +0000 UTC m=+0.077525929 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, release=1214.1726694543, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, container_name=kepler, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.4, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9)
Nov 22 08:59:05 compute-0 podman[256782]: 2025-11-22 08:59:05.139597964 +0000 UTC m=+0.082292947 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251118)
Nov 22 08:59:05 compute-0 podman[256788]: 2025-11-22 08:59:05.156589072 +0000 UTC m=+0.092688247 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Nov 22 08:59:05 compute-0 podman[256781]: 2025-11-22 08:59:05.177040163 +0000 UTC m=+0.122462170 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 08:59:07 compute-0 nova_compute[189268]: 2025-11-22 08:59:07.668 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:09 compute-0 nova_compute[189268]: 2025-11-22 08:59:09.150 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:59:10.003 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:59:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:59:10.004 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:59:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 08:59:10.005 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.119 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.120 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.121 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
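[annotation] The acquire/release pairs around "compute_resources" come from oslo.concurrency's lockutils, which the resource tracker uses to serialize every mutation of its view of the node; the "acquired :: waited" and "released :: held" DEBUG lines above are emitted by the synchronized decorator's inner wrapper. A minimal sketch of the same primitive, reusing the lock name from the log with an illustrative guarded function:

```python
from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def update_view():
    # Only one thread in this process runs the body at a time; entry and
    # exit produce the acquired/released DEBUG lines seen above.
    pass

update_view()
```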
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.121 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.195 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.262 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.263 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.322 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.329 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.402 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.403 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.466 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
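[annotation] The disk audit above wraps each qemu-img call in oslo.concurrency's prlimit helper, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30) so a pathological image cannot stall the compute agent. The same invocation can be reproduced with processutils, which spawns exactly the "python3 -m oslo_concurrency.prlimit ... -- env LC_ALL=C LANG=C qemu-img info ..." command line logged above; a minimal sketch with the path and limits copied from the log:

```python
from oslo_concurrency import processutils

limits = processutils.ProcessLimits(
    address_space=1073741824,  # 1 GiB, matches --as above
    cpu_time=30)               # seconds, matches --cpu above

out, err = processutils.execute(
    "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info",
    "/var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk",
    "--force-share", "--output=json",
    prlimit=limits)
print(out)  # JSON description of the disk image
```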
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.826 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.829 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5051MB free_disk=72.36903762817383GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.829 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.830 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.903 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.904 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4809ca0d-4075-4d68-8ee7-5275c4253891 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.904 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.905 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.967 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.986 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.988 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 08:59:10 compute-0 nova_compute[189268]: 2025-11-22 08:59:10.988 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 08:59:12 compute-0 nova_compute[189268]: 2025-11-22 08:59:12.673 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:13 compute-0 podman[256871]: 2025-11-22 08:59:13.156584839 +0000 UTC m=+0.107456386 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.buildah.version=1.33.7, name=ubi9-minimal, maintainer=Red Hat, Inc., release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 22 08:59:14 compute-0 nova_compute[189268]: 2025-11-22 08:59:14.154 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:14 compute-0 podman[256892]: 2025-11-22 08:59:14.742733069 +0000 UTC m=+0.066129753 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 22 08:59:17 compute-0 nova_compute[189268]: 2025-11-22 08:59:17.673 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:19 compute-0 nova_compute[189268]: 2025-11-22 08:59:19.157 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:22 compute-0 nova_compute[189268]: 2025-11-22 08:59:22.677 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:24 compute-0 podman[256917]: 2025-11-22 08:59:24.125678522 +0000 UTC m=+0.078453814 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 22 08:59:24 compute-0 podman[256918]: 2025-11-22 08:59:24.141759665 +0000 UTC m=+0.091550636 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 08:59:24 compute-0 podman[256916]: 2025-11-22 08:59:24.150653695 +0000 UTC m=+0.107695872 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 08:59:24 compute-0 nova_compute[189268]: 2025-11-22 08:59:24.160 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:27 compute-0 nova_compute[189268]: 2025-11-22 08:59:27.678 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:29 compute-0 nova_compute[189268]: 2025-11-22 08:59:29.163 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:29 compute-0 podman[203476]: time="2025-11-22T08:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:59:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:59:29 compute-0 podman[203476]: @ - - [22/Nov/2025:08:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
Nov 22 08:59:31 compute-0 openstack_network_exporter[205661]: ERROR   08:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:59:31 compute-0 openstack_network_exporter[205661]: ERROR   08:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 08:59:31 compute-0 openstack_network_exporter[205661]: ERROR   08:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 08:59:31 compute-0 openstack_network_exporter[205661]: ERROR   08:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 08:59:31 compute-0 openstack_network_exporter[205661]: ERROR   08:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 08:59:32 compute-0 nova_compute[189268]: 2025-11-22 08:59:32.683 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:34 compute-0 nova_compute[189268]: 2025-11-22 08:59:34.166 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:36 compute-0 podman[256980]: 2025-11-22 08:59:36.140127031 +0000 UTC m=+0.076777219 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 08:59:36 compute-0 podman[256975]: 2025-11-22 08:59:36.157291323 +0000 UTC m=+0.108256536 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, release=1214.1726694543, io.openshift.expose-services=, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Nov 22 08:59:36 compute-0 podman[256977]: 2025-11-22 08:59:36.159827442 +0000 UTC m=+0.092708658 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_id=edpm, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 22 08:59:36 compute-0 podman[256976]: 2025-11-22 08:59:36.189809929 +0000 UTC m=+0.132404906 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 08:59:37 compute-0 nova_compute[189268]: 2025-11-22 08:59:37.685 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:39 compute-0 nova_compute[189268]: 2025-11-22 08:59:39.170 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:42 compute-0 nova_compute[189268]: 2025-11-22 08:59:42.688 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:44 compute-0 podman[257055]: 2025-11-22 08:59:44.131775193 +0000 UTC m=+0.092122792 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, release=1755695350, config_id=edpm)
Nov 22 08:59:44 compute-0 nova_compute[189268]: 2025-11-22 08:59:44.173 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:45 compute-0 podman[257075]: 2025-11-22 08:59:45.101959384 +0000 UTC m=+0.059872484 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 08:59:46 compute-0 nova_compute[189268]: 2025-11-22 08:59:46.989 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:59:47 compute-0 nova_compute[189268]: 2025-11-22 08:59:47.690 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:48 compute-0 nova_compute[189268]: 2025-11-22 08:59:48.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:59:49 compute-0 nova_compute[189268]: 2025-11-22 08:59:49.178 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:50 compute-0 nova_compute[189268]: 2025-11-22 08:59:50.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:59:50 compute-0 nova_compute[189268]: 2025-11-22 08:59:50.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 08:59:50 compute-0 nova_compute[189268]: 2025-11-22 08:59:50.313 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-4809ca0d-4075-4d68-8ee7-5275c4253891" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 08:59:50 compute-0 nova_compute[189268]: 2025-11-22 08:59:50.314 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-4809ca0d-4075-4d68-8ee7-5275c4253891" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 08:59:50 compute-0 nova_compute[189268]: 2025-11-22 08:59:50.314 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 08:59:52 compute-0 nova_compute[189268]: 2025-11-22 08:59:52.343 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Updating instance_info_cache with network_info: [{"id": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "address": "fa:16:3e:5e:e6:af", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.103", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec3e8b1-78", "ovs_interfaceid": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 08:59:52 compute-0 nova_compute[189268]: 2025-11-22 08:59:52.573 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-4809ca0d-4075-4d68-8ee7-5275c4253891" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 08:59:52 compute-0 nova_compute[189268]: 2025-11-22 08:59:52.573 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 08:59:52 compute-0 nova_compute[189268]: 2025-11-22 08:59:52.574 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:59:52 compute-0 nova_compute[189268]: 2025-11-22 08:59:52.575 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:59:52 compute-0 nova_compute[189268]: 2025-11-22 08:59:52.575 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 08:59:52 compute-0 nova_compute[189268]: 2025-11-22 08:59:52.692 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:54 compute-0 nova_compute[189268]: 2025-11-22 08:59:54.182 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:55 compute-0 podman[257109]: 2025-11-22 08:59:55.120799464 +0000 UTC m=+0.062615938 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 22 08:59:55 compute-0 podman[257107]: 2025-11-22 08:59:55.128307705 +0000 UTC m=+0.078083313 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 08:59:55 compute-0 podman[257108]: 2025-11-22 08:59:55.148247303 +0000 UTC m=+0.091501976 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 08:59:56 compute-0 nova_compute[189268]: 2025-11-22 08:59:56.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:59:57 compute-0 nova_compute[189268]: 2025-11-22 08:59:57.694 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:59 compute-0 nova_compute[189268]: 2025-11-22 08:59:59.103 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 08:59:59 compute-0 nova_compute[189268]: 2025-11-22 08:59:59.186 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 08:59:59 compute-0 podman[203476]: time="2025-11-22T08:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 08:59:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 08:59:59 compute-0 podman[203476]: @ - - [22/Nov/2025:08:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
Nov 22 09:00:00 compute-0 nova_compute[189268]: 2025-11-22 09:00:00.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:00:01 compute-0 openstack_network_exporter[205661]: ERROR   09:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:00:01 compute-0 openstack_network_exporter[205661]: ERROR   09:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:00:01 compute-0 openstack_network_exporter[205661]: ERROR   09:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:00:01 compute-0 openstack_network_exporter[205661]: ERROR   09:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:00:01 compute-0 openstack_network_exporter[205661]: ERROR   09:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 09:00:02 compute-0 nova_compute[189268]: 2025-11-22 09:00:02.696 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:04 compute-0 nova_compute[189268]: 2025-11-22 09:00:04.189 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:07 compute-0 podman[257168]: 2025-11-22 09:00:07.144827231 +0000 UTC m=+0.100663322 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, architecture=x86_64, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Nov 22 09:00:07 compute-0 podman[257170]: 2025-11-22 09:00:07.153792272 +0000 UTC m=+0.098372910 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:00:07 compute-0 podman[257176]: 2025-11-22 09:00:07.165274391 +0000 UTC m=+0.104451414 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 22 09:00:07 compute-0 podman[257169]: 2025-11-22 09:00:07.169761093 +0000 UTC m=+0.119573162 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 09:00:07 compute-0 nova_compute[189268]: 2025-11-22 09:00:07.698 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:08 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 22 09:00:09 compute-0 nova_compute[189268]: 2025-11-22 09:00:09.193 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:00:10.004 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:00:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:00:10.005 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:00:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:00:10.006 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:00:10 compute-0 nova_compute[189268]: 2025-11-22 09:00:10.095 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.125 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.126 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.127 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.127 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.208 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.270 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.272 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.350 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.363 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.433 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.434 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.503 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
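The processutils lines above show the resource audit probing each instance's disk with qemu-img under oslo.concurrency's prlimit wrapper, which caps the child's address space (--as, 1 GiB) and CPU time (--cpu, 30 s) so a pathological image cannot wedge the compute service. A rough equivalent (assumptions: oslo.concurrency and qemu-img are available; the instance path is copied from the log):

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824,  # --as
                                        cpu_time=30)               # --cpu
    out, _err = processutils.execute(
        "qemu-img", "info",
        "/var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk",
        "--force-share", "--output=json",
        prlimit=limits,
        env_variables={"LC_ALL": "C", "LANG": "C"},
    )
    print(out)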
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.702 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.906 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.907 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5067MB free_disk=72.365966796875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
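The pci_devices blob in the resource view above is plain JSON (vendor 1af4 is Red Hat/virtio, 8086 is Intel), so it can be summarized directly. A small sketch, with the list truncated to three entries copied from the line above:

    import json
    from collections import Counter

    pci = json.loads("""[
      {"dev_id": "pci_0000_00_06_0", "vendor_id": "1af4", "product_id": "1005"},
      {"dev_id": "pci_0000_00_01_0", "vendor_id": "8086", "product_id": "7000"},
      {"dev_id": "pci_0000_00_04_0", "vendor_id": "1af4", "product_id": "1001"}
    ]""")
    print(Counter(dev["vendor_id"] for dev in pci))
    # On the full list above this gives Counter({'1af4': 6, '8086': 5}).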
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.908 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.909 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.977 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.978 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4809ca0d-4075-4d68-8ee7-5275c4253891 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.979 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:00:12 compute-0 nova_compute[189268]: 2025-11-22 09:00:12.979 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:00:13 compute-0 nova_compute[189268]: 2025-11-22 09:00:13.048 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:00:13 compute-0 nova_compute[189268]: 2025-11-22 09:00:13.064 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
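The inventory record above fixes the schedulable capacity placement will offer for this node: per resource class, usable = (total - reserved) * allocation_ratio. A worked check, plugging in the figures from the line above:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2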
Nov 22 09:00:13 compute-0 nova_compute[189268]: 2025-11-22 09:00:13.067 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:00:13 compute-0 nova_compute[189268]: 2025-11-22 09:00:13.068 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:00:14 compute-0 nova_compute[189268]: 2025-11-22 09:00:14.196 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:14 compute-0 podman[257262]: 2025-11-22 09:00:14.764869774 +0000 UTC m=+0.097100386 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, release=1755695350, io.openshift.tags=minimal rhel9)
Nov 22 09:00:16 compute-0 podman[257283]: 2025-11-22 09:00:16.139041385 +0000 UTC m=+0.094616470 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
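The --collector.systemd.unit-include flag in the node_exporter command line above is an anchored regular expression; only matching systemd units are exported. A quick check of that pattern against a few units appearing in this log (Python's fullmatch stands in for node_exporter's own anchoring):

    import re

    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ("virtproxyd.service", "ovs-vswitchd.service", "sshd.service"):
        print(unit, bool(unit_include.fullmatch(unit)))
    # virtproxyd.service True, ovs-vswitchd.service True, sshd.service False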
Nov 22 09:00:17 compute-0 nova_compute[189268]: 2025-11-22 09:00:17.709 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:19 compute-0 nova_compute[189268]: 2025-11-22 09:00:19.200 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.098 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] exceeds the number of worker threads available to execute them; therefore, the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.099 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
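The two manager messages above say the [pollsters] source has more pollsters than worker threads ([1]), so pollster tasks queue and run one at a time, stretching the cycle. A minimal illustration of that effect (the sleep is a stand-in for one pollster's work):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)  # stand-in for one pollster's libvirt calls
        return name

    pollsters = [f"pollster-{i}" for i in range(4)]
    with ThreadPoolExecutor(max_workers=1) as pool:  # [1] thread, as logged
        for done in pool.map(poll, pollsters):
            print("finished", done)  # completes serially, ~0.4 s total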
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.100 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c59d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.105 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5', 'name': 'te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn', 'flavor': {'id': '60cc47c3-347f-4964-bb52-9bef8d0548a9', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '0f738201-0a54-4f17-a455-df9aa7963f79'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6872b219a7f441adb7db6dc2b4e66fd7', 'user_id': '37215e9bc58040aeb55ccd7e534b2a8c', 'hostId': '44bfd8cb608e8e36740e229fabc76c7785419d24d05fef040bbf4521', 'status': 'active', 'metadata': {'metering.server_group': 'e65dbf71-31dd-495a-8544-26d84c5284b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.108 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4809ca0d-4075-4d68-8ee7-5275c4253891', 'name': 'te-1646439-asg-gba3vv6vgk7b-tmn4otq576rq-xk2uuzpcqq5p', 'flavor': {'id': '60cc47c3-347f-4964-bb52-9bef8d0548a9', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '0f738201-0a54-4f17-a455-df9aa7963f79'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6872b219a7f441adb7db6dc2b4e66fd7', 'user_id': '37215e9bc58040aeb55ccd7e534b2a8c', 'hostId': '44bfd8cb608e8e36740e229fabc76c7785419d24d05fef040bbf4521', 'status': 'active', 'metadata': {'metering.server_group': 'e65dbf71-31dd-495a-8544-26d84c5284b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.108 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.108 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.108 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.108 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.109 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-22T09:00:22.108773) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.112 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.116 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.116 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
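The network.incoming.bytes volumes above (1520 and 1976) are cumulative per-vNIC byte counters read through libvirt. A rough equivalent of that read (assumptions: python3-libvirt on the compute host; the instance name is taken from the discovery record above, while the tap device name is a placeholder):

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-0000000f")
    # interfaceStats returns (rx_bytes, rx_packets, rx_errs, rx_drop,
    #                         tx_bytes, tx_packets, tx_errs, tx_drop).
    rx_bytes = dom.interfaceStats("tap0")[0]  # "tap0" is illustrative
    print("network.incoming.bytes =", rx_bytes)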
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.117 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.117 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.117 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.117 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.117 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.117 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.117 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-22T09:00:22.117341) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.117 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.118 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.118 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.118 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.118 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.118 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.118 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.118 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.118 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-22T09:00:22.118726) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.119 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.119 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.119 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.119 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.119 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.119 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.119 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.120 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.120 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-22T09:00:22.119934) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.120 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.120 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.120 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.120 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.120 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.121 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.121 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-22T09:00:22.121120) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.142 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/cpu volume: 333350000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.165 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/cpu volume: 123690000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.166 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
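The cpu volumes above are cumulative guest CPU time in nanoseconds, so the two samples convert to:

    print(333350000000 / 1e9)  # 333.35 s of CPU time for instance-0000000f
    print(123690000000 / 1e9)  # 123.69 s of CPU time for instance-00000010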
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.166 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.166 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.166 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.166 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.166 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.166 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.167 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.167 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-22T09:00:22.166770) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.167 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.167 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.167 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.167 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.167 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.168 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.168 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-22T09:00:22.168037) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.182 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.182 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.197 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.198 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.198 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
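disk.device.capacity is sampled once per block device, which is why each instance reports two volumes above: 1073741824 bytes is the 1 GiB m1.nano root disk, and 509952 bytes is presumably the smaller second device (likely the config drive; an assumption, since the log does not name the devices). The conversions:

    print(1073741824 / 2**30)  # 1.0 GiB, matching the flavor's disk=1
    print(509952 / 1024)       # 498.0 KiB for the second device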
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.198 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.198 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.198 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.198 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.199 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.200 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-22T09:00:22.199068) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.240 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.bytes volume: 31074816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.241 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.276 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.bytes volume: 30469120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.276 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.277 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.277 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.277 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.277 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.277 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.277 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.277 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.277 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-22T09:00:22.277590) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.278 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.278 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.278 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.278 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.279 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.279 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.279 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.279 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.279 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.279 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.279 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.280 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.280 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.279 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-22T09:00:22.279364) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.280 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.280 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.280 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.280 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.280 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.280 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.280 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.latency volume: 1863604470 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.280 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.latency volume: 205964976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.281 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-22T09:00:22.280690) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.281 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.latency volume: 2884659985 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.281 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.latency volume: 273690857 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.281 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.282 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.282 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.282 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.282 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.282 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.282 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.282 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-22T09:00:22.282394) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.283 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.bytes.delta volume: 1886 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.283 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.283 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.283 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.283 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.283 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.283 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.283 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.requests volume: 355 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.284 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.284 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-22T09:00:22.283759) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.284 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.requests volume: 279 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.284 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.285 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.285 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.285 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.285 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.285 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.285 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.285 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.requests volume: 1137 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.285 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.286 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-22T09:00:22.285631) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.286 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.requests volume: 1089 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.286 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.286 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.287 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.287 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.287 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.287 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.287 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.287 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.287 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-22T09:00:22.287322) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.287 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.288 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.usage volume: 29818880 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.288 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.288 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.288 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.288 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.288 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.289 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.289 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.289 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.289 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.289 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.bytes volume: 72847360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.289 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-22T09:00:22.289121) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.290 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.290 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.290 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.290 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.290 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.290 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.290 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.290 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.291 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.291 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.291 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.291 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-22T09:00:22.290779) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.291 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.292 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.292 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.292 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.292 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-22T09:00:22.292157) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.292 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.latency volume: 65236545384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.292 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.292 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.latency volume: 90960245279 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.293 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.293 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.293 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.293 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.293 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.293 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.293 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.294 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.294 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.294 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.294 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-22T09:00:22.293950) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.294 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.294 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.295 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.295 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.295 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-22T09:00:22.295031) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.295 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.295 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.296 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.296 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.296 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.296 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.296 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.297 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.297 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.297 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.297 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-22T09:00:22.296531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.297 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.297 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.297 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.298 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.298 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.298 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.298 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.298 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.298 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.299 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-22T09:00:22.297887) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.299 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.299 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.299 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.bytes.delta volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.300 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.300 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.300 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-22T09:00:22.299301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.300 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.300 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.300 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.300 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.300 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.300 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/memory.usage volume: 42.328125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.301 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/memory.usage volume: 43.0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.301 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.301 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.301 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.302 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.302 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.302 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.302 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.302 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.302 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.304 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.304 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.304 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.304 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.304 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.304 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.304 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.304 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:00:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:00:22.305 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-22T09:00:22.300873) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:00:22 compute-0 nova_compute[189268]: 2025-11-22 09:00:22.714 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:24 compute-0 nova_compute[189268]: 2025-11-22 09:00:24.205 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:26 compute-0 podman[257307]: 2025-11-22 09:00:26.121095685 +0000 UTC m=+0.073547923 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:00:26 compute-0 podman[257308]: 2025-11-22 09:00:26.12577417 +0000 UTC m=+0.071287961 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 09:00:26 compute-0 podman[257313]: 2025-11-22 09:00:26.134670151 +0000 UTC m=+0.073788339 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:00:27 compute-0 nova_compute[189268]: 2025-11-22 09:00:27.716 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:29 compute-0 nova_compute[189268]: 2025-11-22 09:00:29.209 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:29 compute-0 podman[203476]: time="2025-11-22T09:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:00:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 09:00:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Nov 22 09:00:31 compute-0 openstack_network_exporter[205661]: ERROR   09:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:00:31 compute-0 openstack_network_exporter[205661]: ERROR   09:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:00:31 compute-0 openstack_network_exporter[205661]: ERROR   09:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:00:31 compute-0 openstack_network_exporter[205661]: ERROR   09:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:00:31 compute-0 openstack_network_exporter[205661]: ERROR   09:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 09:00:32 compute-0 nova_compute[189268]: 2025-11-22 09:00:32.719 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:34 compute-0 nova_compute[189268]: 2025-11-22 09:00:34.214 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:37 compute-0 nova_compute[189268]: 2025-11-22 09:00:37.722 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:38 compute-0 podman[257369]: 2025-11-22 09:00:38.136059277 +0000 UTC m=+0.081511096 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3)
Nov 22 09:00:38 compute-0 podman[257366]: 2025-11-22 09:00:38.157286719 +0000 UTC m=+0.114103994 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.expose-services=, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, config_id=edpm, io.buildah.version=1.29.0, vcs-type=git, version=9.4)
Nov 22 09:00:38 compute-0 podman[257368]: 2025-11-22 09:00:38.163666631 +0000 UTC m=+0.114065504 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, org.label-schema.build-date=20251118)
Nov 22 09:00:38 compute-0 podman[257367]: 2025-11-22 09:00:38.194270545 +0000 UTC m=+0.148727337 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 09:00:39 compute-0 nova_compute[189268]: 2025-11-22 09:00:39.219 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:42 compute-0 nova_compute[189268]: 2025-11-22 09:00:42.722 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:44 compute-0 nova_compute[189268]: 2025-11-22 09:00:44.224 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:45 compute-0 podman[257445]: 2025-11-22 09:00:45.110523022 +0000 UTC m=+0.069668477 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, config_id=edpm, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Nov 22 09:00:47 compute-0 podman[257465]: 2025-11-22 09:00:47.106734217 +0000 UTC m=+0.057438758 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 09:00:47 compute-0 nova_compute[189268]: 2025-11-22 09:00:47.725 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:49 compute-0 nova_compute[189268]: 2025-11-22 09:00:49.064 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:00:49 compute-0 nova_compute[189268]: 2025-11-22 09:00:49.065 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:00:49 compute-0 nova_compute[189268]: 2025-11-22 09:00:49.228 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:51 compute-0 nova_compute[189268]: 2025-11-22 09:00:51.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:00:52 compute-0 nova_compute[189268]: 2025-11-22 09:00:52.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:00:52 compute-0 nova_compute[189268]: 2025-11-22 09:00:52.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:00:52 compute-0 nova_compute[189268]: 2025-11-22 09:00:52.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:00:52 compute-0 nova_compute[189268]: 2025-11-22 09:00:52.344 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:00:52 compute-0 nova_compute[189268]: 2025-11-22 09:00:52.345 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:00:52 compute-0 nova_compute[189268]: 2025-11-22 09:00:52.345 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:00:52 compute-0 nova_compute[189268]: 2025-11-22 09:00:52.346 189273 DEBUG nova.objects.instance [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:00:52 compute-0 nova_compute[189268]: 2025-11-22 09:00:52.727 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:53 compute-0 nova_compute[189268]: 2025-11-22 09:00:53.258 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updating instance_info_cache with network_info: [{"id": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "address": "fa:16:3e:84:a4:4f", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped7b62da-e4", "ovs_interfaceid": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:00:53 compute-0 nova_compute[189268]: 2025-11-22 09:00:53.271 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:00:53 compute-0 nova_compute[189268]: 2025-11-22 09:00:53.272 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:00:53 compute-0 nova_compute[189268]: 2025-11-22 09:00:53.272 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:00:53 compute-0 nova_compute[189268]: 2025-11-22 09:00:53.273 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:00:54 compute-0 nova_compute[189268]: 2025-11-22 09:00:54.232 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:57 compute-0 nova_compute[189268]: 2025-11-22 09:00:57.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:00:57 compute-0 podman[257489]: 2025-11-22 09:00:57.107631704 +0000 UTC m=+0.069148054 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:00:57 compute-0 podman[257490]: 2025-11-22 09:00:57.10973076 +0000 UTC m=+0.066963793 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 09:00:57 compute-0 podman[257491]: 2025-11-22 09:00:57.130318335 +0000 UTC m=+0.083257983 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Nov 22 09:00:57 compute-0 nova_compute[189268]: 2025-11-22 09:00:57.730 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:59 compute-0 nova_compute[189268]: 2025-11-22 09:00:59.235 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:00:59 compute-0 podman[203476]: time="2025-11-22T09:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:00:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 09:00:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
Nov 22 09:01:00 compute-0 nova_compute[189268]: 2025-11-22 09:01:00.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:01 compute-0 openstack_network_exporter[205661]: ERROR   09:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:01:01 compute-0 openstack_network_exporter[205661]: ERROR   09:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:01:01 compute-0 openstack_network_exporter[205661]: ERROR   09:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:01:01 compute-0 openstack_network_exporter[205661]: ERROR   09:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:01:01 compute-0 openstack_network_exporter[205661]: ERROR   09:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 09:01:01 compute-0 CROND[257547]: (root) CMD (run-parts /etc/cron.hourly)
Nov 22 09:01:01 compute-0 run-parts[257550]: (/etc/cron.hourly) starting 0anacron
Nov 22 09:01:01 compute-0 run-parts[257556]: (/etc/cron.hourly) finished 0anacron
Nov 22 09:01:01 compute-0 CROND[257546]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 22 09:01:02 compute-0 nova_compute[189268]: 2025-11-22 09:01:02.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:02 compute-0 nova_compute[189268]: 2025-11-22 09:01:02.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:02 compute-0 nova_compute[189268]: 2025-11-22 09:01:02.732 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:04 compute-0 nova_compute[189268]: 2025-11-22 09:01:04.239 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:07 compute-0 nova_compute[189268]: 2025-11-22 09:01:07.734 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:09 compute-0 nova_compute[189268]: 2025-11-22 09:01:09.110 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:09 compute-0 nova_compute[189268]: 2025-11-22 09:01:09.110 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 09:01:09 compute-0 podman[257560]: 2025-11-22 09:01:09.146295907 +0000 UTC m=+0.078323658 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 09:01:09 compute-0 podman[257559]: 2025-11-22 09:01:09.150939882 +0000 UTC m=+0.087113885 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:01:09 compute-0 podman[257557]: 2025-11-22 09:01:09.153860101 +0000 UTC m=+0.099167909 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=kepler, name=ubi9, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release-0.7.12=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, release=1214.1726694543)
Nov 22 09:01:09 compute-0 podman[257558]: 2025-11-22 09:01:09.178159725 +0000 UTC m=+0.115052547 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 09:01:09 compute-0 nova_compute[189268]: 2025-11-22 09:01:09.243 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:01:10.006 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:01:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:01:10.006 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:01:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:01:10.007 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:01:11 compute-0 nova_compute[189268]: 2025-11-22 09:01:11.113 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:11 compute-0 nova_compute[189268]: 2025-11-22 09:01:11.114 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 09:01:11 compute-0 nova_compute[189268]: 2025-11-22 09:01:11.132 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 09:01:12 compute-0 nova_compute[189268]: 2025-11-22 09:01:12.735 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:13 compute-0 nova_compute[189268]: 2025-11-22 09:01:13.117 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:13 compute-0 nova_compute[189268]: 2025-11-22 09:01:13.144 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:01:13 compute-0 nova_compute[189268]: 2025-11-22 09:01:13.145 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:01:13 compute-0 nova_compute[189268]: 2025-11-22 09:01:13.146 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:01:13 compute-0 nova_compute[189268]: 2025-11-22 09:01:13.146 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:01:13 compute-0 nova_compute[189268]: 2025-11-22 09:01:13.221 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:01:13 compute-0 nova_compute[189268]: 2025-11-22 09:01:13.299 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:01:13 compute-0 nova_compute[189268]: 2025-11-22 09:01:13.300 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:01:13 compute-0 nova_compute[189268]: 2025-11-22 09:01:13.357 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:01:13 compute-0 nova_compute[189268]: 2025-11-22 09:01:13.366 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:01:13 compute-0 nova_compute[189268]: 2025-11-22 09:01:13.425 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:01:13 compute-0 nova_compute[189268]: 2025-11-22 09:01:13.427 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:01:13 compute-0 nova_compute[189268]: 2025-11-22 09:01:13.511 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:01:13 compute-0 nova_compute[189268]: 2025-11-22 09:01:13.898 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:01:13 compute-0 nova_compute[189268]: 2025-11-22 09:01:13.900 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5056MB free_disk=72.365966796875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:01:13 compute-0 nova_compute[189268]: 2025-11-22 09:01:13.900 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:01:13 compute-0 nova_compute[189268]: 2025-11-22 09:01:13.901 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:01:14 compute-0 nova_compute[189268]: 2025-11-22 09:01:14.034 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:01:14 compute-0 nova_compute[189268]: 2025-11-22 09:01:14.035 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4809ca0d-4075-4d68-8ee7-5275c4253891 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:01:14 compute-0 nova_compute[189268]: 2025-11-22 09:01:14.035 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:01:14 compute-0 nova_compute[189268]: 2025-11-22 09:01:14.036 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:01:14 compute-0 nova_compute[189268]: 2025-11-22 09:01:14.087 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing inventories for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 09:01:14 compute-0 nova_compute[189268]: 2025-11-22 09:01:14.149 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating ProviderTree inventory for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 09:01:14 compute-0 nova_compute[189268]: 2025-11-22 09:01:14.150 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating inventory in ProviderTree for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 09:01:14 compute-0 nova_compute[189268]: 2025-11-22 09:01:14.166 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing aggregate associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 09:01:14 compute-0 nova_compute[189268]: 2025-11-22 09:01:14.187 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing trait associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, traits: COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 09:01:14 compute-0 nova_compute[189268]: 2025-11-22 09:01:14.241 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:01:14 compute-0 nova_compute[189268]: 2025-11-22 09:01:14.246 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:14 compute-0 nova_compute[189268]: 2025-11-22 09:01:14.252 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:01:14 compute-0 nova_compute[189268]: 2025-11-22 09:01:14.253 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:01:14 compute-0 nova_compute[189268]: 2025-11-22 09:01:14.254 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:01:16 compute-0 podman[257645]: 2025-11-22 09:01:16.185897339 +0000 UTC m=+0.141741165 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, version=9.6, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container)
Nov 22 09:01:17 compute-0 nova_compute[189268]: 2025-11-22 09:01:17.738 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:18 compute-0 podman[257665]: 2025-11-22 09:01:18.139242225 +0000 UTC m=+0.097197617 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 09:01:19 compute-0 nova_compute[189268]: 2025-11-22 09:01:19.250 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:22 compute-0 nova_compute[189268]: 2025-11-22 09:01:22.739 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:24 compute-0 nova_compute[189268]: 2025-11-22 09:01:24.253 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:27 compute-0 nova_compute[189268]: 2025-11-22 09:01:27.740 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:27 compute-0 sshd-session[257689]: Invalid user oracle from 80.94.92.164 port 41862
Nov 22 09:01:27 compute-0 podman[257692]: 2025-11-22 09:01:27.983797888 +0000 UTC m=+0.061627640 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 09:01:28 compute-0 podman[257691]: 2025-11-22 09:01:28.017960107 +0000 UTC m=+0.099205940 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:01:28 compute-0 podman[257693]: 2025-11-22 09:01:28.02289683 +0000 UTC m=+0.096417066 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 22 09:01:28 compute-0 sshd-session[257689]: Connection closed by invalid user oracle 80.94.92.164 port 41862 [preauth]
Nov 22 09:01:29 compute-0 nova_compute[189268]: 2025-11-22 09:01:29.256 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:29 compute-0 podman[203476]: time="2025-11-22T09:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:01:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 09:01:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
Nov 22 09:01:31 compute-0 openstack_network_exporter[205661]: ERROR   09:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:01:31 compute-0 openstack_network_exporter[205661]: ERROR   09:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:01:31 compute-0 openstack_network_exporter[205661]: ERROR   09:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:01:31 compute-0 openstack_network_exporter[205661]: ERROR   09:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 09:01:31 compute-0 openstack_network_exporter[205661]: ERROR   09:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:01:32 compute-0 nova_compute[189268]: 2025-11-22 09:01:32.743 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:34 compute-0 nova_compute[189268]: 2025-11-22 09:01:34.259 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:37 compute-0 nova_compute[189268]: 2025-11-22 09:01:37.747 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:39 compute-0 nova_compute[189268]: 2025-11-22 09:01:39.263 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:40 compute-0 podman[257757]: 2025-11-22 09:01:40.155071727 +0000 UTC m=+0.080321444 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 22 09:01:40 compute-0 podman[257752]: 2025-11-22 09:01:40.162207609 +0000 UTC m=+0.091821043 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:01:40 compute-0 podman[257750]: 2025-11-22 09:01:40.1715527 +0000 UTC m=+0.112914190 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vendor=Red Hat, Inc.)
Nov 22 09:01:40 compute-0 podman[257751]: 2025-11-22 09:01:40.193872011 +0000 UTC m=+0.131977593 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:01:42 compute-0 nova_compute[189268]: 2025-11-22 09:01:42.750 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:44 compute-0 nova_compute[189268]: 2025-11-22 09:01:44.267 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:47 compute-0 podman[257831]: 2025-11-22 09:01:47.115940109 +0000 UTC m=+0.069619344 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_id=edpm, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 22 09:01:47 compute-0 nova_compute[189268]: 2025-11-22 09:01:47.753 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:49 compute-0 podman[257851]: 2025-11-22 09:01:49.112963551 +0000 UTC m=+0.069602294 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 09:01:49 compute-0 nova_compute[189268]: 2025-11-22 09:01:49.231 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:49 compute-0 nova_compute[189268]: 2025-11-22 09:01:49.271 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:50 compute-0 nova_compute[189268]: 2025-11-22 09:01:50.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:51 compute-0 nova_compute[189268]: 2025-11-22 09:01:51.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:52 compute-0 nova_compute[189268]: 2025-11-22 09:01:52.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:52 compute-0 nova_compute[189268]: 2025-11-22 09:01:52.101 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:01:52 compute-0 nova_compute[189268]: 2025-11-22 09:01:52.755 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:54 compute-0 nova_compute[189268]: 2025-11-22 09:01:54.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:54 compute-0 nova_compute[189268]: 2025-11-22 09:01:54.101 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:01:54 compute-0 nova_compute[189268]: 2025-11-22 09:01:54.275 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:54 compute-0 nova_compute[189268]: 2025-11-22 09:01:54.359 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-4809ca0d-4075-4d68-8ee7-5275c4253891" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:01:54 compute-0 nova_compute[189268]: 2025-11-22 09:01:54.360 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-4809ca0d-4075-4d68-8ee7-5275c4253891" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:01:54 compute-0 nova_compute[189268]: 2025-11-22 09:01:54.360 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:01:55 compute-0 nova_compute[189268]: 2025-11-22 09:01:55.550 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Updating instance_info_cache with network_info: [{"id": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "address": "fa:16:3e:5e:e6:af", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.103", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec3e8b1-78", "ovs_interfaceid": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:01:55 compute-0 nova_compute[189268]: 2025-11-22 09:01:55.564 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-4809ca0d-4075-4d68-8ee7-5275c4253891" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:01:55 compute-0 nova_compute[189268]: 2025-11-22 09:01:55.565 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:01:57 compute-0 nova_compute[189268]: 2025-11-22 09:01:57.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:01:57 compute-0 nova_compute[189268]: 2025-11-22 09:01:57.757 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:58 compute-0 podman[257873]: 2025-11-22 09:01:58.139351217 +0000 UTC m=+0.093458915 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 22 09:01:58 compute-0 podman[257874]: 2025-11-22 09:01:58.141051343 +0000 UTC m=+0.084744032 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 09:01:58 compute-0 podman[257875]: 2025-11-22 09:01:58.167858565 +0000 UTC m=+0.099592021 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:01:59 compute-0 nova_compute[189268]: 2025-11-22 09:01:59.278 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:01:59 compute-0 podman[203476]: time="2025-11-22T09:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:01:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 09:01:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Nov 22 09:02:01 compute-0 openstack_network_exporter[205661]: ERROR   09:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:02:01 compute-0 openstack_network_exporter[205661]: ERROR   09:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:02:01 compute-0 openstack_network_exporter[205661]: ERROR   09:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:02:01 compute-0 openstack_network_exporter[205661]: ERROR   09:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 09:02:01 compute-0 openstack_network_exporter[205661]: ERROR   09:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:02:02 compute-0 nova_compute[189268]: 2025-11-22 09:02:02.102 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:02:02 compute-0 nova_compute[189268]: 2025-11-22 09:02:02.759 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:03 compute-0 nova_compute[189268]: 2025-11-22 09:02:03.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:02:04 compute-0 nova_compute[189268]: 2025-11-22 09:02:04.281 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:07 compute-0 nova_compute[189268]: 2025-11-22 09:02:07.761 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:09 compute-0 nova_compute[189268]: 2025-11-22 09:02:09.283 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:02:10.007 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:02:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:02:10.008 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:02:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:02:10.009 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:02:11 compute-0 podman[257931]: 2025-11-22 09:02:11.147722613 +0000 UTC m=+0.079231473 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:02:11 compute-0 podman[257930]: 2025-11-22 09:02:11.1483378 +0000 UTC m=+0.087782004 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:02:11 compute-0 podman[257928]: 2025-11-22 09:02:11.1557933 +0000 UTC m=+0.100658230 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.tags=base rhel9, name=ubi9, container_name=kepler, version=9.4, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 22 09:02:11 compute-0 podman[257929]: 2025-11-22 09:02:11.202335953 +0000 UTC m=+0.143866843 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 22 09:02:12 compute-0 nova_compute[189268]: 2025-11-22 09:02:12.764 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:14 compute-0 nova_compute[189268]: 2025-11-22 09:02:14.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:02:14 compute-0 nova_compute[189268]: 2025-11-22 09:02:14.288 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.126 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.126 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.127 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
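The acquire/held/released triple above is the standard oslo.concurrency decorator at work: the ResourceTracker wraps its method in a named semaphore, and lockutils emits the timing lines. A minimal sketch, assuming oslo.concurrency is installed (the function body is a placeholder):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Runs with the "compute_resources" semaphore held; lockutils logs
        # the "Acquiring"/"acquired"/"released" lines seen in the journal.
        pass

    clean_compute_node_cache()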
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.127 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.206 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.263 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.264 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.346 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.354 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.417 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.418 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.480 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
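Each qemu-img probe above runs under the oslo_concurrency.prlimit wrapper, which caps the child's address space at 1 GiB (--as=1073741824) and its CPU time at 30 s (--cpu=30) so a pathological image cannot wedge the compute service. A stdlib-only sketch of the same effect, with the limits applied in the child before exec (disk path taken from the log):

    import json
    import os
    import resource
    import subprocess

    def limit_child():
        # Mirror --as=1073741824 and --cpu=30 from the logged wrapper.
        resource.setrlimit(resource.RLIMIT_AS, (1073741824, 1073741824))
        resource.setrlimit(resource.RLIMIT_CPU, (30, 30))

    proc = subprocess.run(
        ["qemu-img", "info",
         "/var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk",
         "--force-share", "--output=json"],
        env=dict(os.environ, LC_ALL="C", LANG="C"),  # like `env LC_ALL=C LANG=C`
        preexec_fn=limit_child, capture_output=True, text=True, check=True,
    )
    info = json.loads(proc.stdout)
    print(info.get("format"), info.get("virtual-size"))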
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.835 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.837 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5056MB free_disk=72.365966796875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
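The pci_devices payload in the resource view above is plain JSON, so it can be summarized offline; a small sketch tallying devices by vendor (1af4 is virtio, 8086 is Intel; only three of the eleven logged devices are reproduced here):

    import json
    from collections import Counter

    pci_devices = json.loads('''[
      {"address": "0000:00:06.0", "vendor_id": "1af4", "product_id": "1005"},
      {"address": "0000:00:01.0", "vendor_id": "8086", "product_id": "7000"},
      {"address": "0000:00:04.0", "vendor_id": "1af4", "product_id": "1001"}
    ]''')
    print(Counter(dev["vendor_id"] for dev in pci_devices))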
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.838 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.838 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.907 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.907 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4809ca0d-4075-4d68-8ee7-5275c4253891 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.907 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.908 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.960 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.976 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
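Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio, which is why 8 physical vCPUs can back more than 8 guest vCPUs here. A worked sketch with the logged numbers:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2 -- ample room for the two
    # m1.nano guests (2 VCPU, 256 MB, 2 GB) shown in the final resource view.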
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.980 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:02:15 compute-0 nova_compute[189268]: 2025-11-22 09:02:15.981 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:02:17 compute-0 nova_compute[189268]: 2025-11-22 09:02:17.765 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:18 compute-0 podman[258017]: 2025-11-22 09:02:18.111035511 +0000 UTC m=+0.065268447 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, version=9.6, maintainer=Red Hat, Inc., config_id=edpm, distribution-scope=public, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 22 09:02:19 compute-0 nova_compute[189268]: 2025-11-22 09:02:19.292 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:20 compute-0 podman[258037]: 2025-11-22 09:02:20.131849683 +0000 UTC m=+0.087490746 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
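node_exporter's --collector.systemd.unit-include flag takes an anchored regular expression, so only units matching the pattern in the node_exporter config above are scraped. A quick check of that pattern (fullmatch approximates node_exporter's anchoring; the unit names are examples):

    import re

    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ["edpm_nova_compute.service", "ovs-vswitchd.service",
                 "virtqemud.service", "sshd.service"]:
        print(unit, bool(unit_include.fullmatch(unit)))
    # Only sshd.service falls outside the include list.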
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.099 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.099 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
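The two manager lines above describe simple backpressure: more pollster tasks than executor threads, so tasks queue and serialize. A minimal stand-alone illustration with concurrent.futures (the sleep stands in for one pollster's work):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)  # stand-in for one pollster run
        return name

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:  # [1] thread, as logged
        list(pool.map(poll, [f"pollster-{i}" for i in range(5)]))
    print(f"5 tasks, 1 worker: {time.monotonic() - start:.2f}s (serialized)")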
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.099 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.100 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.107 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.107 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.107 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.108 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.108 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.108 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.108 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.108 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.108 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.108 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.108 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.108 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.109 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.109 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.109 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83bec350>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.110 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5', 'name': 'te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn', 'flavor': {'id': '60cc47c3-347f-4964-bb52-9bef8d0548a9', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '0f738201-0a54-4f17-a455-df9aa7963f79'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6872b219a7f441adb7db6dc2b4e66fd7', 'user_id': '37215e9bc58040aeb55ccd7e534b2a8c', 'hostId': '44bfd8cb608e8e36740e229fabc76c7785419d24d05fef040bbf4521', 'status': 'active', 'metadata': {'metering.server_group': 'e65dbf71-31dd-495a-8544-26d84c5284b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.113 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4809ca0d-4075-4d68-8ee7-5275c4253891', 'name': 'te-1646439-asg-gba3vv6vgk7b-tmn4otq576rq-xk2uuzpcqq5p', 'flavor': {'id': '60cc47c3-347f-4964-bb52-9bef8d0548a9', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '0f738201-0a54-4f17-a455-df9aa7963f79'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6872b219a7f441adb7db6dc2b4e66fd7', 'user_id': '37215e9bc58040aeb55ccd7e534b2a8c', 'hostId': '44bfd8cb608e8e36740e229fabc76c7785419d24d05fef040bbf4521', 'status': 'active', 'metadata': {'metering.server_group': 'e65dbf71-31dd-495a-8544-26d84c5284b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
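Each discovery payload above is a flat dict describing one libvirt guest; downstream pollsters only need the running instances on this host. A consumer-side sketch of that filter (trimmed copies of the two logged payloads; this is not ceilometer's internal code):

    instances = [
        {"id": "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5",
         "OS-EXT-STS:vm_state": "running",
         "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com"},
        {"id": "4809ca0d-4075-4d68-8ee7-5275c4253891",
         "OS-EXT-STS:vm_state": "running",
         "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com"},
    ]
    local = [i["id"] for i in instances
             if i["OS-EXT-STS:vm_state"] == "running"]
    print(local)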
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.114 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.114 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.114 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.114 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.115 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-22T09:02:22.114335) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.118 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.122 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.122 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
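network.incoming.bytes is a cumulative counter (1520 B and 1976 B since each guest booted), so a consumer derives throughput from two successive polls. A sketch; the second set of readings is hypothetical:

    def to_rate(prev_b, curr_b, interval_s):
        # Counters only move forward (barring a guest reboot), so clamp at 0.
        return max(curr_b - prev_b, 0) / interval_s

    prev = {"4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5": 1520,
            "4809ca0d-4075-4d68-8ee7-5275c4253891": 1976}  # from the log
    curr = {"4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5": 4520,
            "4809ca0d-4075-4d68-8ee7-5275c4253891": 5976}  # hypothetical
    for uuid in prev:
        print(uuid[:8], f"{to_rate(prev[uuid], curr[uuid], 120):.1f} B/s")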
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.122 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.122 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.122 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.123 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.123 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.123 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.123 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.123 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-22T09:02:22.123110) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.124 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.124 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.124 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.124 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.124 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.124 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.124 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.124 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.125 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-22T09:02:22.124511) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.125 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.125 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.125 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.125 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.125 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.125 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.126 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.126 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.126 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.126 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.126 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.126 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.127 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.127 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-22T09:02:22.125929) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-22T09:02:22.127116) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.146 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/cpu volume: 334660000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.168 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/cpu volume: 243310000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.168 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
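The cpu meter is cumulative guest CPU time in nanoseconds, so 334660000000 above is about 335 s of CPU for the first instance. A conversion sketch; the uptime used for the utilization figure is an assumption, not in the log:

    NS_PER_S = 1_000_000_000
    samples = {
        "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5": 334_660_000_000,
        "4809ca0d-4075-4d68-8ee7-5275c4253891": 243_310_000_000,
    }
    assumed_uptime_s = 3 * 3600  # illustrative only
    for uuid, cpu_ns in samples.items():
        cpu_s = cpu_ns / NS_PER_S
        print(f"{uuid[:8]}: {cpu_s:.0f}s CPU, "
              f"~{cpu_s / assumed_uptime_s:.0%} of one vCPU")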
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.169 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.169 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.169 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.169 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.169 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.169 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.169 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.170 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.170 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.170 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.170 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-22T09:02:22.169314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.170 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.170 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.170 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.171 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-22T09:02:22.170942) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.183 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.183 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.195 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.195 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.196 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
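Each instance reports two disk.device.capacity samples because it exposes two block devices; 1073741824 B is exactly the 1 GB root disk from the m1.nano flavor, and the 509952 B device is consistent with a small config drive (which device is which is an assumption here). A quick unit check:

    for volume_b in (1073741824, 509952):
        print(f"{volume_b} B = {volume_b / 2**30:.6f} GiB")
    # 1073741824 B = 1.000000 GiB, matching disk=1 in the flavor.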
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.196 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.196 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.196 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.196 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.197 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-22T09:02:22.196474) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.228 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.bytes volume: 31074816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.229 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.263 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.bytes volume: 30469120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.263 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.264 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.264 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.264 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.264 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.264 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.264 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.264 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.265 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.265 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.265 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.265 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
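[Editor's note] Each cycle above also logs a coordination check: because none of these pollsters belong to a polling source with a coordination group, the group name and hashring list are both [None] and this agent polls everything itself. When a group is configured, agents joined through tooz share a hash ring and each polls only the resources that hash to it. A hypothetical sketch with tooz's HashRing, member names invented for illustration:

    from tooz import hashring  # assumption: the tooz library is installed

    ring = hashring.HashRing(["agent-compute-0", "agent-compute-1"])

    def should_poll(resource_id, me="agent-compute-0"):
        # get_nodes returns the set of ring members that own this key
        return me in ring.get_nodes(resource_id.encode(), replicas=1)

    print(should_poll("4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5"))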
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.266 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.266 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.266 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.266 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.266 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.266 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.266 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-22T09:02:22.264655) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.267 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.267 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.267 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.267 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.267 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.267 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.267 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.267 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.268 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.268 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.latency volume: 1863604470 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.268 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.latency volume: 205964976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.268 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.latency volume: 2884659985 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.268 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-22T09:02:22.266705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.268 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-22T09:02:22.268137) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.268 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.latency volume: 273690857 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.269 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.269 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.269 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.269 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.269 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.269 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.269 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.269 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.270 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.270 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.270 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.270 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.270 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.270 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-22T09:02:22.269660) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.270 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.271 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.requests volume: 355 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.271 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-22T09:02:22.270939) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.271 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.271 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.requests volume: 279 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.271 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.272 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.272 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.272 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.272 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.272 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.272 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.272 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.requests volume: 1137 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.272 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.273 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.requests volume: 1089 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.273 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.273 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.273 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.273 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.274 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.274 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.274 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-22T09:02:22.272616) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.274 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.274 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-22T09:02:22.274287) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.274 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.274 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.275 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.usage volume: 29818880 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.275 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.275 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
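[Editor's note] The disk.device.allocation, disk.device.usage, and disk.device.capacity meters polled in this cycle correspond naturally to the three numbers libvirt reports per block device. A sketch, assuming libvirt-python and an invented device name; how Ceilometer maps these onto its meter names is an implementation detail, but one hypervisor call yields all three values, consistent with the near-identical timestamps above:

    import libvirt  # assumption: libvirt-python bindings are installed

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5")
    # blockInfo returns [capacity, allocation, physical], all in bytes
    capacity, allocation, physical = dom.blockInfo("vda")
    conn.close()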
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.275 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.275 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.275 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.276 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.276 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.276 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.276 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.276 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.bytes volume: 72847360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.276 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.277 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.277 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.277 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.277 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.277 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.277 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.277 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.277 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.278 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.278 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.278 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.278 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.278 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.278 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.278 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-22T09:02:22.276097) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.279 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-22T09:02:22.277640) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.279 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.latency volume: 65236545384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.279 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-22T09:02:22.278810) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.279 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.279 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.latency volume: 90960245279 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.279 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.280 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.280 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.280 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.280 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.280 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.280 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.280 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.281 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.281 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-22T09:02:22.280421) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.281 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.281 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.281 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.281 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.281 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.281 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.282 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.282 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.282 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.282 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-22T09:02:22.281531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.282 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.282 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.282 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.283 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.283 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.283 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.283 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.283 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.283 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.283 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.284 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.284 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.284 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.284 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-22T09:02:22.282729) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.284 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.284 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.285 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.285 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-22T09:02:22.283677) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.285 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.285 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.285 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-22T09:02:22.285104) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.285 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.285 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
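[Editor's note] The *.delta meters in this cycle all report 0: a delta sample is the change in the cumulative counter since the previous poll, so an idle interface yields zero. A sketch of the bookkeeping, with the cache structure invented for illustration:

    _previous = {}  # (resource_id, meter) -> last cumulative value seen

    def delta(resource_id, meter, current_total):
        """Change since the previous poll; 0 on the first observation."""
        key = (resource_id, meter)
        prev = _previous.get(key)
        _previous[key] = current_total
        return 0 if prev is None else max(0, current_total - prev)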
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.286 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.286 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.286 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.286 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.286 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.286 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.286 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.286 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/memory.usage volume: 42.328125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.286 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/memory.usage volume: 43.0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.287 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.287 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.288 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-22T09:02:22.286638) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.287 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:02:22.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:02:22 compute-0 nova_compute[189268]: 2025-11-22 09:02:22.770 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:24 compute-0 nova_compute[189268]: 2025-11-22 09:02:24.296 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:27 compute-0 nova_compute[189268]: 2025-11-22 09:02:27.770 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:29 compute-0 podman[258064]: 2025-11-22 09:02:29.135233461 +0000 UTC m=+0.085839980 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 09:02:29 compute-0 podman[258063]: 2025-11-22 09:02:29.140879083 +0000 UTC m=+0.081740350 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 09:02:29 compute-0 podman[258062]: 2025-11-22 09:02:29.144419749 +0000 UTC m=+0.095770159 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 09:02:29 compute-0 nova_compute[189268]: 2025-11-22 09:02:29.301 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:29 compute-0 podman[203476]: time="2025-11-22T09:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:02:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 09:02:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
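The podman[203476] access-log lines come from the libpod REST API service: the prometheus-podman-exporter (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data above) lists containers and fetches stats on every scrape. A sketch of the same containers/json query over the Unix socket using only the standard library; the socket path is taken from the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a Unix domain socket (podman's API endpoint)."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")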
Nov 22 09:02:31 compute-0 openstack_network_exporter[205661]: ERROR   09:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:02:31 compute-0 openstack_network_exporter[205661]: ERROR   09:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:02:31 compute-0 openstack_network_exporter[205661]: ERROR   09:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:02:31 compute-0 openstack_network_exporter[205661]: ERROR   09:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:02:31 compute-0 openstack_network_exporter[205661]: ERROR   09:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
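These openstack_network_exporter errors repeat on a fixed schedule and look like probes of daemons that simply are not present on a compute node: ovn-northd runs on the controllers, and the exporter evidently cannot find a control socket for a local ovsdb-server in the rundir it checks. ovs-appctl-style tooling resolves a target by looking for <rundir>/<daemon>.<pid>.ctl; a sketch of that lookup, assuming the conventional /run/openvswitch rundir:

    import glob
    import os

    def find_ctl(daemon, rundir="/run/openvswitch"):
        """Return the daemon's control socket path, or None when the
        daemon is not running here (producing the errors above)."""
        matches = glob.glob(os.path.join(rundir, f"{daemon}.*.ctl"))
        return matches[0] if matches else None

    for name in ("ovsdb-server", "ovn-northd"):
        print(name, "->", find_ctl(name) or "no control socket files found")

Likewise the dpif-netdev/pmd-perf-show and pmd-rxq-show calls fail with "please specify an existing datapath" presumably because this host runs the kernel OVS datapath rather than a userspace (DPDK) datapath with PMD threads.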
Nov 22 09:02:32 compute-0 nova_compute[189268]: 2025-11-22 09:02:32.772 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:34 compute-0 nova_compute[189268]: 2025-11-22 09:02:34.303 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:37 compute-0 nova_compute[189268]: 2025-11-22 09:02:37.775 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:39 compute-0 nova_compute[189268]: 2025-11-22 09:02:39.305 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:42 compute-0 podman[258130]: 2025-11-22 09:02:42.156822582 +0000 UTC m=+0.080637170 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 22 09:02:42 compute-0 podman[258119]: 2025-11-22 09:02:42.159581896 +0000 UTC m=+0.104969335 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler, release=1214.1726694543, com.redhat.component=ubi9-container, version=9.4, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, io.buildah.version=1.29.0)
Nov 22 09:02:42 compute-0 podman[258126]: 2025-11-22 09:02:42.160309346 +0000 UTC m=+0.089660713 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 22 09:02:42 compute-0 podman[258120]: 2025-11-22 09:02:42.188942437 +0000 UTC m=+0.125437086 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 09:02:42 compute-0 nova_compute[189268]: 2025-11-22 09:02:42.778 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:44 compute-0 nova_compute[189268]: 2025-11-22 09:02:44.308 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:47 compute-0 nova_compute[189268]: 2025-11-22 09:02:47.779 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:49 compute-0 podman[258195]: 2025-11-22 09:02:49.144306991 +0000 UTC m=+0.097287339 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, version=9.6, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350)
Nov 22 09:02:49 compute-0 nova_compute[189268]: 2025-11-22 09:02:49.310 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:50 compute-0 nova_compute[189268]: 2025-11-22 09:02:50.978 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:02:51 compute-0 nova_compute[189268]: 2025-11-22 09:02:51.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:02:51 compute-0 nova_compute[189268]: 2025-11-22 09:02:51.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:02:51 compute-0 podman[258218]: 2025-11-22 09:02:51.109122036 +0000 UTC m=+0.061355252 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
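node_exporter here is launched with most collectors disabled and a systemd unit allow-list, so its scrape surface is deliberately small. A quick probe of the resulting endpoint on host port 9100, assuming plain HTTP for illustration (the real listener takes its TLS settings from node_exporter.yaml, ignored here):

    import urllib.request

    # Hypothetical probe of the exporter published on port 9100 above.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        node_series = [line for line in resp.read().decode().splitlines()
                       if line.startswith("node_")]
    print(len(node_series), "node_* metric lines exposed")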
Nov 22 09:02:52 compute-0 nova_compute[189268]: 2025-11-22 09:02:52.783 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:54 compute-0 nova_compute[189268]: 2025-11-22 09:02:54.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:02:54 compute-0 nova_compute[189268]: 2025-11-22 09:02:54.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
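_reclaim_queued_deletes is the periodic task that purges soft-deleted instances; with reclaim_instance_interval unset (<= 0) it is a no-op, which is exactly what the "skipping..." line records. The guard pattern, sketched with a stand-in for nova's CONF object:

    import logging

    LOG = logging.getLogger("nova.compute.manager")

    def _reclaim_queued_deletes(conf):
        """No-op unless reclaim_instance_interval is positive, matching
        the 'CONF.reclaim_instance_interval <= 0, skipping...' line."""
        if conf.reclaim_instance_interval <= 0:
            LOG.debug("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # Otherwise: look up SOFT_DELETED instances older than the
        # interval and delete them for real (omitted here).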
Nov 22 09:02:54 compute-0 nova_compute[189268]: 2025-11-22 09:02:54.315 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:56 compute-0 nova_compute[189268]: 2025-11-22 09:02:56.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:02:56 compute-0 nova_compute[189268]: 2025-11-22 09:02:56.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:02:56 compute-0 nova_compute[189268]: 2025-11-22 09:02:56.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:02:56 compute-0 nova_compute[189268]: 2025-11-22 09:02:56.379 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:02:56 compute-0 nova_compute[189268]: 2025-11-22 09:02:56.382 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:02:56 compute-0 nova_compute[189268]: 2025-11-22 09:02:56.383 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:02:56 compute-0 nova_compute[189268]: 2025-11-22 09:02:56.384 189273 DEBUG nova.objects.instance [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:02:57 compute-0 nova_compute[189268]: 2025-11-22 09:02:57.325 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updating instance_info_cache with network_info: [{"id": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "address": "fa:16:3e:84:a4:4f", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped7b62da-e4", "ovs_interfaceid": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:02:57 compute-0 nova_compute[189268]: 2025-11-22 09:02:57.343 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:02:57 compute-0 nova_compute[189268]: 2025-11-22 09:02:57.344 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
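The _heal_instance_info_cache sequence above is: take the per-instance refresh_cache-<uuid> lock, force-refresh the network info from Neutron, write the result into instance_info_cache, release the lock. The same shape using oslo.concurrency's real lock() context manager; get_nw_info and save_cache are stand-ins for the Neutron query and the DB update:

    from oslo_concurrency import lockutils

    def heal_info_cache(instance_uuid, get_nw_info, save_cache):
        """Serialize cache refreshes per instance, as the Acquiring/
        Acquired/Releasing lock lines above show."""
        with lockutils.lock(f"refresh_cache-{instance_uuid}"):
            nw_info = get_nw_info(instance_uuid)   # Neutron port/subnet data
            save_cache(instance_uuid, nw_info)     # update instance_info_cache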
Nov 22 09:02:57 compute-0 nova_compute[189268]: 2025-11-22 09:02:57.786 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:58 compute-0 nova_compute[189268]: 2025-11-22 09:02:58.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:02:59 compute-0 nova_compute[189268]: 2025-11-22 09:02:59.320 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:02:59 compute-0 podman[203476]: time="2025-11-22T09:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:02:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 09:02:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4819 "" "Go-http-client/1.1"
Nov 22 09:03:00 compute-0 podman[258243]: 2025-11-22 09:03:00.133418757 +0000 UTC m=+0.079424978 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 09:03:00 compute-0 podman[258244]: 2025-11-22 09:03:00.133724996 +0000 UTC m=+0.072320418 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 09:03:00 compute-0 podman[258242]: 2025-11-22 09:03:00.137093246 +0000 UTC m=+0.088196244 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:03:01 compute-0 openstack_network_exporter[205661]: ERROR   09:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:03:01 compute-0 openstack_network_exporter[205661]: ERROR   09:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:03:01 compute-0 openstack_network_exporter[205661]: ERROR   09:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:03:01 compute-0 openstack_network_exporter[205661]: ERROR   09:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 09:03:01 compute-0 openstack_network_exporter[205661]: ERROR   09:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:03:02 compute-0 nova_compute[189268]: 2025-11-22 09:03:02.788 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:04 compute-0 nova_compute[189268]: 2025-11-22 09:03:04.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:04 compute-0 nova_compute[189268]: 2025-11-22 09:03:04.323 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:05 compute-0 nova_compute[189268]: 2025-11-22 09:03:05.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:07 compute-0 nova_compute[189268]: 2025-11-22 09:03:07.790 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:09 compute-0 nova_compute[189268]: 2025-11-22 09:03:09.325 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:03:10.011 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:03:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:03:10.012 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:03:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:03:10.013 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
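The ovn_metadata_agent trio above (Acquiring, acquired :: waited 0.002s, released :: held 0.001s) is oslo.concurrency's standard instrumentation around a named lock. The usual way to get those lines is the synchronized decorator; the monitors and their active()/respawn() methods are hypothetical here:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes(monitors):
        """Runs under the named lock, emitting acquired/released DEBUG
        lines with waited/held timings like those above."""
        for monitor in monitors:
            if not monitor.active():
                monitor.respawn()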
Nov 22 09:03:12 compute-0 nova_compute[189268]: 2025-11-22 09:03:12.792 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:13 compute-0 podman[258303]: 2025-11-22 09:03:13.14466513 +0000 UTC m=+0.089797078 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a)
Nov 22 09:03:13 compute-0 podman[258304]: 2025-11-22 09:03:13.167240927 +0000 UTC m=+0.110381991 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 09:03:13 compute-0 podman[258301]: 2025-11-22 09:03:13.1922272 +0000 UTC m=+0.134346757 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, version=9.4, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 22 09:03:13 compute-0 podman[258302]: 2025-11-22 09:03:13.233273714 +0000 UTC m=+0.167287852 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:03:14 compute-0 nova_compute[189268]: 2025-11-22 09:03:14.329 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.130 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.131 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.132 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.133 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.232 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.335 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.337 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.421 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.431 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.489 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.491 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.552 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
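Each qemu-img info call above is wrapped in oslo_concurrency.prlimit, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30) so a malformed image cannot hang or balloon the compute service. The same invocation through oslo.concurrency's public API; the disk path is a placeholder:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1024 * 1024 * 1024,  # --as=1073741824
        cpu_time=30)                       # --cpu=30
    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", "/var/lib/nova/instances/<uuid>/disk",
        "--force-share", "--output=json",
        prlimit=limits)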
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.903 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.905 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5046MB free_disk=72.36598587036133GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.906 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.907 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.984 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.985 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4809ca0d-4075-4d68-8ee7-5275c4253891 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.985 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:03:16 compute-0 nova_compute[189268]: 2025-11-22 09:03:16.986 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:03:17 compute-0 nova_compute[189268]: 2025-11-22 09:03:17.046 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:03:17 compute-0 nova_compute[189268]: 2025-11-22 09:03:17.058 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:03:17 compute-0 nova_compute[189268]: 2025-11-22 09:03:17.060 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:03:17 compute-0 nova_compute[189268]: 2025-11-22 09:03:17.061 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:03:17 compute-0 nova_compute[189268]: 2025-11-22 09:03:17.794 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:19 compute-0 nova_compute[189268]: 2025-11-22 09:03:19.334 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:20 compute-0 podman[258395]: 2025-11-22 09:03:20.143792321 +0000 UTC m=+0.098074300 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, release=1755695350, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public)
Nov 22 09:03:22 compute-0 podman[258415]: 2025-11-22 09:03:22.130889496 +0000 UTC m=+0.089831889 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 09:03:22 compute-0 nova_compute[189268]: 2025-11-22 09:03:22.797 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:24 compute-0 nova_compute[189268]: 2025-11-22 09:03:24.337 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:27 compute-0 nova_compute[189268]: 2025-11-22 09:03:27.799 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:29 compute-0 nova_compute[189268]: 2025-11-22 09:03:29.342 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:29 compute-0 podman[203476]: time="2025-11-22T09:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:03:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 09:03:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4815 "" "Go-http-client/1.1"
Nov 22 09:03:31 compute-0 podman[258437]: 2025-11-22 09:03:31.13299948 +0000 UTC m=+0.083678993 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3)
Nov 22 09:03:31 compute-0 podman[258438]: 2025-11-22 09:03:31.154823467 +0000 UTC m=+0.097580356 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 09:03:31 compute-0 podman[258439]: 2025-11-22 09:03:31.158239299 +0000 UTC m=+0.104697098 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 09:03:31 compute-0 openstack_network_exporter[205661]: ERROR   09:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:03:31 compute-0 openstack_network_exporter[205661]: ERROR   09:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:03:31 compute-0 openstack_network_exporter[205661]: ERROR   09:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:03:31 compute-0 openstack_network_exporter[205661]: ERROR   09:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 09:03:31 compute-0 openstack_network_exporter[205661]: ERROR   09:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:03:32 compute-0 nova_compute[189268]: 2025-11-22 09:03:32.803 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:34 compute-0 nova_compute[189268]: 2025-11-22 09:03:34.346 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:37 compute-0 nova_compute[189268]: 2025-11-22 09:03:37.806 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:39 compute-0 nova_compute[189268]: 2025-11-22 09:03:39.351 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:42 compute-0 nova_compute[189268]: 2025-11-22 09:03:42.806 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:44 compute-0 podman[258498]: 2025-11-22 09:03:44.15306455 +0000 UTC m=+0.097264279 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:03:44 compute-0 podman[258496]: 2025-11-22 09:03:44.167652483 +0000 UTC m=+0.121607554 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, maintainer=Red Hat, Inc., release-0.7.12=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, version=9.4, architecture=x86_64)
Nov 22 09:03:44 compute-0 podman[258505]: 2025-11-22 09:03:44.176664426 +0000 UTC m=+0.112347525 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 22 09:03:44 compute-0 podman[258497]: 2025-11-22 09:03:44.213152258 +0000 UTC m=+0.158672402 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Nov 22 09:03:44 compute-0 nova_compute[189268]: 2025-11-22 09:03:44.355 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:47 compute-0 nova_compute[189268]: 2025-11-22 09:03:47.809 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:49 compute-0 nova_compute[189268]: 2025-11-22 09:03:49.358 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:51 compute-0 nova_compute[189268]: 2025-11-22 09:03:51.056 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:51 compute-0 nova_compute[189268]: 2025-11-22 09:03:51.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:51 compute-0 nova_compute[189268]: 2025-11-22 09:03:51.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:51 compute-0 podman[258574]: 2025-11-22 09:03:51.145002607 +0000 UTC m=+0.092804408 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64)
Nov 22 09:03:52 compute-0 nova_compute[189268]: 2025-11-22 09:03:52.811 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:53 compute-0 podman[258596]: 2025-11-22 09:03:53.112576037 +0000 UTC m=+0.063697395 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 09:03:54 compute-0 nova_compute[189268]: 2025-11-22 09:03:54.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:54 compute-0 nova_compute[189268]: 2025-11-22 09:03:54.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:03:54 compute-0 nova_compute[189268]: 2025-11-22 09:03:54.362 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:57 compute-0 nova_compute[189268]: 2025-11-22 09:03:57.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:03:57 compute-0 nova_compute[189268]: 2025-11-22 09:03:57.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:03:57 compute-0 nova_compute[189268]: 2025-11-22 09:03:57.431 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-4809ca0d-4075-4d68-8ee7-5275c4253891" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:03:57 compute-0 nova_compute[189268]: 2025-11-22 09:03:57.432 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-4809ca0d-4075-4d68-8ee7-5275c4253891" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:03:57 compute-0 nova_compute[189268]: 2025-11-22 09:03:57.433 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:03:57 compute-0 nova_compute[189268]: 2025-11-22 09:03:57.815 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:59 compute-0 nova_compute[189268]: 2025-11-22 09:03:59.365 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:03:59 compute-0 podman[203476]: time="2025-11-22T09:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:03:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 09:03:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
Nov 22 09:04:00 compute-0 nova_compute[189268]: 2025-11-22 09:04:00.032 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Updating instance_info_cache with network_info: [{"id": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "address": "fa:16:3e:5e:e6:af", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.103", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec3e8b1-78", "ovs_interfaceid": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:04:00 compute-0 nova_compute[189268]: 2025-11-22 09:04:00.053 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-4809ca0d-4075-4d68-8ee7-5275c4253891" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:04:00 compute-0 nova_compute[189268]: 2025-11-22 09:04:00.054 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:04:00 compute-0 nova_compute[189268]: 2025-11-22 09:04:00.055 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:04:01 compute-0 openstack_network_exporter[205661]: ERROR   09:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:04:01 compute-0 openstack_network_exporter[205661]: ERROR   09:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:04:01 compute-0 openstack_network_exporter[205661]: ERROR   09:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:04:01 compute-0 openstack_network_exporter[205661]: ERROR   09:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 09:04:01 compute-0 openstack_network_exporter[205661]: ERROR   09:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:04:02 compute-0 podman[258620]: 2025-11-22 09:04:02.331155039 +0000 UTC m=+0.077220679 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 09:04:02 compute-0 podman[258619]: 2025-11-22 09:04:02.335621979 +0000 UTC m=+0.092947462 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 22 09:04:02 compute-0 podman[258621]: 2025-11-22 09:04:02.369973043 +0000 UTC m=+0.104367429 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:04:02 compute-0 nova_compute[189268]: 2025-11-22 09:04:02.818 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:04 compute-0 nova_compute[189268]: 2025-11-22 09:04:04.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:04:04 compute-0 nova_compute[189268]: 2025-11-22 09:04:04.370 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:05 compute-0 nova_compute[189268]: 2025-11-22 09:04:05.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:04:07 compute-0 nova_compute[189268]: 2025-11-22 09:04:07.820 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:09 compute-0 nova_compute[189268]: 2025-11-22 09:04:09.375 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:04:10.010 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:04:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:04:10.012 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:04:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:04:10.013 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:04:12 compute-0 nova_compute[189268]: 2025-11-22 09:04:12.822 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:14 compute-0 nova_compute[189268]: 2025-11-22 09:04:14.378 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:14 compute-0 podman[258683]: 2025-11-22 09:04:14.776099125 +0000 UTC m=+0.084511715 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 09:04:14 compute-0 podman[258680]: 2025-11-22 09:04:14.790535093 +0000 UTC m=+0.102153770 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, container_name=kepler, vcs-type=git, managed_by=edpm_ansible, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, build-date=2024-09-18T21:23:30, release=1214.1726694543, version=9.4)
Nov 22 09:04:14 compute-0 podman[258682]: 2025-11-22 09:04:14.807353166 +0000 UTC m=+0.114011309 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 22 09:04:14 compute-0 podman[258681]: 2025-11-22 09:04:14.819824601 +0000 UTC m=+0.129307430 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 09:04:17 compute-0 nova_compute[189268]: 2025-11-22 09:04:17.824 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.094 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.126 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.149 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.149 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.150 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.150 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.228 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.293 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.294 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.355 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.363 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.419 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.420 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.477 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
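Each disk probe above runs qemu-img info under oslo_concurrency.prlimit, capping the child process at 1 GiB of address space and 30 s of CPU time (the --as/--cpu flags). A minimal sketch of the same invocation through oslo.concurrency, assuming the instance disk path copied from the log:

```python
import json
from oslo_concurrency import processutils

# Limits matching the log's wrapper: --as=1073741824 --cpu=30
limits = processutils.ProcessLimits(address_space=1 * 1024 ** 3, cpu_time=30)

out, _err = processutils.execute(
    "qemu-img", "info",
    "/var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk",
    "--force-share", "--output=json",
    prlimit=limits,
    env_variables={"LC_ALL": "C", "LANG": "C"},
)
info = json.loads(out)
print(info["format"], info["virtual-size"])
```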
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.895 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.896 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4982MB free_disk=72.36598587036133GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.897 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.897 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.979 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.979 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4809ca0d-4075-4d68-8ee7-5275c4253891 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.980 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:04:18 compute-0 nova_compute[189268]: 2025-11-22 09:04:18.980 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:04:19 compute-0 nova_compute[189268]: 2025-11-22 09:04:19.062 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:04:19 compute-0 nova_compute[189268]: 2025-11-22 09:04:19.078 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
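Placement treats that inventory as capacity = (total - reserved) * allocation_ratio per resource class, which is how this 8-vCPU host can carry up to 32 VCPU allocations. A worked check against the logged numbers:

```python
# capacity = (total - reserved) * allocation_ratio, per resource class
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")
# VCPU: 32  MEMORY_MB: 7167  DISK_GB: 70.2
```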
Nov 22 09:04:19 compute-0 nova_compute[189268]: 2025-11-22 09:04:19.079 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:04:19 compute-0 nova_compute[189268]: 2025-11-22 09:04:19.080 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:04:19 compute-0 nova_compute[189268]: 2025-11-22 09:04:19.384 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.099 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.099 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
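The warning above means the agent pushes every registered pollster through a single-thread executor, so one polling cycle takes roughly the sum of the individual poll times. A minimal sketch of that executor shape with stand-in poll functions; the meter names are taken from the log, the timing is illustrative:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def poll(meter: str) -> str:
    time.sleep(0.1)  # stand-in for one pollster's work
    return meter

meters = ["network.incoming.bytes", "network.outgoing.packets", "cpu"]

# max_workers=1, as in the log: pollsters queue and run serially, so
# the cycle takes about len(meters) * poll_time.
with ThreadPoolExecutor(max_workers=1) as executor:
    for finished in executor.map(poll, meters):
        print("finished polling", finished)
```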
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.100 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.100 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.106 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5', 'name': 'te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn', 'flavor': {'id': '60cc47c3-347f-4964-bb52-9bef8d0548a9', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '0f738201-0a54-4f17-a455-df9aa7963f79'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6872b219a7f441adb7db6dc2b4e66fd7', 'user_id': '37215e9bc58040aeb55ccd7e534b2a8c', 'hostId': '44bfd8cb608e8e36740e229fabc76c7785419d24d05fef040bbf4521', 'status': 'active', 'metadata': {'metering.server_group': 'e65dbf71-31dd-495a-8544-26d84c5284b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.109 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4809ca0d-4075-4d68-8ee7-5275c4253891', 'name': 'te-1646439-asg-gba3vv6vgk7b-tmn4otq576rq-xk2uuzpcqq5p', 'flavor': {'id': '60cc47c3-347f-4964-bb52-9bef8d0548a9', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '0f738201-0a54-4f17-a455-df9aa7963f79'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6872b219a7f441adb7db6dc2b4e66fd7', 'user_id': '37215e9bc58040aeb55ccd7e534b2a8c', 'hostId': '44bfd8cb608e8e36740e229fabc76c7785419d24d05fef040bbf4521', 'status': 'active', 'metadata': {'metering.server_group': 'e65dbf71-31dd-495a-8544-26d84c5284b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.109 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.109 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.109 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.110 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.111 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-22T09:04:22.110046) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.114 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.118 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.119 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.119 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.119 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.119 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.119 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.119 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.119 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.120 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.120 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.120 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.120 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.120 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.121 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.121 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.121 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-22T09:04:22.119860) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.121 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.121 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.122 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.122 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.122 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.122 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-22T09:04:22.121332) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.122 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.122 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.122 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.123 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.123 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.123 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.123 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.123 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.124 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.124 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-22T09:04:22.122762) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-22T09:04:22.124213) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.144 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/cpu volume: 336030000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 podman[258773]: 2025-11-22 09:04:22.166075196 +0000 UTC m=+0.110821023 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41)
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.174 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/cpu volume: 335470000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.175 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
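The cpu volumes just logged (336030000000 and 335470000000) are cumulative guest CPU time in nanoseconds, so a utilisation percentage has to be derived from two consecutive samples. A minimal sketch of that derivation for the 1-vCPU m1.nano instances in the log; the earlier sample value and the polling interval are illustrative:

```python
# cpu_util (%) = delta_cpu_ns / (interval_s * 1e9 * vcpus) * 100
def cpu_util(prev_ns: int, curr_ns: int, interval_s: float, vcpus: int) -> float:
    return (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100.0

prev = 335_000_000_000   # hypothetical previous sample
curr = 336_030_000_000   # value from the log
print(f"{cpu_util(prev, curr, interval_s=120, vcpus=1):.1f}%")  # 0.9%
```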
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.175 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.175 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.175 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.175 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.175 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.176 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.176 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.176 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-22T09:04:22.175922) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.176 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.176 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.176 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.177 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.177 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.177 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-22T09:04:22.177247) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.190 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.190 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.207 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.207 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.207 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
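Each instance reports two disk.device.capacity samples because it exposes two block devices: a 1073741824-byte root disk, which is exactly the m1.nano flavor's 1 GB disk reported as 1 GiB of bytes, and a much smaller 509952-byte device that is plausibly the config drive. A quick check of the unit arithmetic:

```python
# Root disk: the flavor's 1 GB disk, reported in bytes as 1 GiB.
assert 1 * 1024 ** 3 == 1_073_741_824
# Second, much smaller device (plausibly a config drive): ~498 KiB.
print(509_952 / 1024, "KiB")  # 498.0 KiB
```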
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.208 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.208 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.208 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.208 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.208 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.208 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-22T09:04:22.208326) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.254 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.bytes volume: 31074816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.254 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.295 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.bytes volume: 31488512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.295 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.296 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.296 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.296 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.296 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.296 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.296 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.297 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.297 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.297 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.297 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.298 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.298 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.298 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.298 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.298 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.298 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-22T09:04:22.296896) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.298 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.299 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-22T09:04:22.298925) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.299 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.299 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.299 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.300 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.300 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.300 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.300 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.300 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.300 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.300 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.latency volume: 1863604470 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.300 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.latency volume: 205964976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.301 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.latency volume: 2920726179 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.301 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-22T09:04:22.300675) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.301 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.latency volume: 283496841 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.302 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.302 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.302 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.302 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.302 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.302 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.302 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.303 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.303 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.303 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.303 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.303 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-22T09:04:22.302732) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.303 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.303 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.304 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.304 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.requests volume: 355 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.304 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.304 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.requests volume: 286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.304 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.305 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-22T09:04:22.304009) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.305 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.305 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.305 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.305 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.305 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.306 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.306 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.requests volume: 1137 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.306 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-22T09:04:22.306001) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.306 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.306 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.306 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.307 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.307 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.307 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.307 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.307 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.307 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.307 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.307 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.308 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.308 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.308 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-22T09:04:22.307653) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.309 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.309 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.309 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.309 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.309 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.309 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.309 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.309 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.310 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.bytes volume: 73039872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.310 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.310 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-22T09:04:22.309507) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.310 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.310 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.311 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.311 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.311 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.311 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.311 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.311 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-22T09:04:22.311429) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.311 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.312 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.312 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.312 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.312 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.312 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.312 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.312 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.latency volume: 65236545384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.313 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-22T09:04:22.312691) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.313 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.313 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.latency volume: 90971252816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.313 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.314 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.314 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.314 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.314 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.314 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.314 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.315 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.315 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.315 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.315 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.315 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.315 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-22T09:04:22.314667) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.315 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.315 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.315 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-22T09:04:22.315677) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.316 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.316 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.316 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.316 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.316 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.316 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.316 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.317 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.317 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-22T09:04:22.316746) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.317 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.317 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.317 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.317 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.317 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.318 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.318 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.318 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.318 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-22T09:04:22.317862) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.318 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.318 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.319 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.319 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.319 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.319 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-22T09:04:22.319087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.319 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.319 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.319 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.320 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.320 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.320 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.320 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.320 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.320 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.320 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/memory.usage volume: 42.328125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.320 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-22T09:04:22.320486) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.320 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/memory.usage volume: 42.42578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.321 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.321 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.321 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.321 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.322 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.322 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.322 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.322 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.322 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.322 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.322 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.322 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.322 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.323 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.323 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.323 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.323 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.323 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.323 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.323 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.323 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.323 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.323 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.324 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.324 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.324 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:04:22.324 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:04:22 compute-0 nova_compute[189268]: 2025-11-22 09:04:22.826 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:24 compute-0 podman[258795]: 2025-11-22 09:04:24.105009305 +0000 UTC m=+0.062468973 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 22 09:04:24 compute-0 nova_compute[189268]: 2025-11-22 09:04:24.389 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:27 compute-0 nova_compute[189268]: 2025-11-22 09:04:27.828 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:29 compute-0 nova_compute[189268]: 2025-11-22 09:04:29.393 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:29 compute-0 podman[203476]: time="2025-11-22T09:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:04:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 09:04:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Nov 22 09:04:31 compute-0 openstack_network_exporter[205661]: ERROR   09:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:04:31 compute-0 openstack_network_exporter[205661]: ERROR   09:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:04:31 compute-0 openstack_network_exporter[205661]: ERROR   09:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:04:31 compute-0 openstack_network_exporter[205661]: ERROR   09:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:04:31 compute-0 openstack_network_exporter[205661]: ERROR   09:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 09:04:32 compute-0 nova_compute[189268]: 2025-11-22 09:04:32.830 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:32 compute-0 podman[258837]: 2025-11-22 09:04:32.93052942 +0000 UTC m=+0.062475842 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 09:04:32 compute-0 podman[258838]: 2025-11-22 09:04:32.93275057 +0000 UTC m=+0.060937591 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:04:32 compute-0 podman[258836]: 2025-11-22 09:04:32.935269768 +0000 UTC m=+0.070111319 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 22 09:04:34 compute-0 nova_compute[189268]: 2025-11-22 09:04:34.398 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:37 compute-0 nova_compute[189268]: 2025-11-22 09:04:37.832 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:39 compute-0 nova_compute[189268]: 2025-11-22 09:04:39.402 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:42 compute-0 nova_compute[189268]: 2025-11-22 09:04:42.835 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:44 compute-0 nova_compute[189268]: 2025-11-22 09:04:44.405 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:45 compute-0 podman[258894]: 2025-11-22 09:04:45.132997665 +0000 UTC m=+0.079657175 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release=1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, maintainer=Red Hat, Inc., name=ubi9, build-date=2024-09-18T21:23:30, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, config_id=edpm, io.openshift.tags=base rhel9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64)
Nov 22 09:04:45 compute-0 podman[258896]: 2025-11-22 09:04:45.137065485 +0000 UTC m=+0.071049173 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.build-date=20251118, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 22 09:04:45 compute-0 podman[258902]: 2025-11-22 09:04:45.140515458 +0000 UTC m=+0.074220358 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 09:04:45 compute-0 podman[258895]: 2025-11-22 09:04:45.167937106 +0000 UTC m=+0.109953420 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:04:47 compute-0 nova_compute[189268]: 2025-11-22 09:04:47.837 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:49 compute-0 nova_compute[189268]: 2025-11-22 09:04:49.409 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:51 compute-0 nova_compute[189268]: 2025-11-22 09:04:51.080 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:04:52 compute-0 nova_compute[189268]: 2025-11-22 09:04:52.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:04:52 compute-0 nova_compute[189268]: 2025-11-22 09:04:52.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:04:52 compute-0 nova_compute[189268]: 2025-11-22 09:04:52.839 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:53 compute-0 podman[258974]: 2025-11-22 09:04:53.141488977 +0000 UTC m=+0.087601638 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Nov 22 09:04:54 compute-0 nova_compute[189268]: 2025-11-22 09:04:54.414 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:55 compute-0 podman[258993]: 2025-11-22 09:04:55.136821862 +0000 UTC m=+0.093677972 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 22 09:04:56 compute-0 nova_compute[189268]: 2025-11-22 09:04:56.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:04:56 compute-0 nova_compute[189268]: 2025-11-22 09:04:56.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:04:57 compute-0 nova_compute[189268]: 2025-11-22 09:04:57.841 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:58 compute-0 nova_compute[189268]: 2025-11-22 09:04:58.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:04:59 compute-0 nova_compute[189268]: 2025-11-22 09:04:59.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:04:59 compute-0 nova_compute[189268]: 2025-11-22 09:04:59.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:04:59 compute-0 nova_compute[189268]: 2025-11-22 09:04:59.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:04:59 compute-0 nova_compute[189268]: 2025-11-22 09:04:59.419 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:04:59 compute-0 nova_compute[189268]: 2025-11-22 09:04:59.447 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:04:59 compute-0 nova_compute[189268]: 2025-11-22 09:04:59.448 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:04:59 compute-0 nova_compute[189268]: 2025-11-22 09:04:59.448 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:04:59 compute-0 nova_compute[189268]: 2025-11-22 09:04:59.449 189273 DEBUG nova.objects.instance [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:04:59 compute-0 podman[203476]: time="2025-11-22T09:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:04:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 09:04:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Nov 22 09:05:00 compute-0 nova_compute[189268]: 2025-11-22 09:05:00.397 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updating instance_info_cache with network_info: [{"id": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "address": "fa:16:3e:84:a4:4f", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped7b62da-e4", "ovs_interfaceid": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:05:00 compute-0 nova_compute[189268]: 2025-11-22 09:05:00.419 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:05:00 compute-0 nova_compute[189268]: 2025-11-22 09:05:00.419 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:05:01 compute-0 openstack_network_exporter[205661]: ERROR   09:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:05:01 compute-0 openstack_network_exporter[205661]: ERROR   09:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:05:01 compute-0 openstack_network_exporter[205661]: ERROR   09:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:05:01 compute-0 openstack_network_exporter[205661]: ERROR   09:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:05:01 compute-0 openstack_network_exporter[205661]: ERROR   09:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 09:05:02 compute-0 nova_compute[189268]: 2025-11-22 09:05:02.844 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:03 compute-0 podman[259017]: 2025-11-22 09:05:03.158372939 +0000 UTC m=+0.100062193 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 09:05:03 compute-0 podman[259016]: 2025-11-22 09:05:03.173347343 +0000 UTC m=+0.113857265 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:05:03 compute-0 podman[259018]: 2025-11-22 09:05:03.17324647 +0000 UTC m=+0.105843850 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 22 09:05:04 compute-0 nova_compute[189268]: 2025-11-22 09:05:04.423 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:05 compute-0 nova_compute[189268]: 2025-11-22 09:05:05.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:05:06 compute-0 nova_compute[189268]: 2025-11-22 09:05:06.097 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:05:07 compute-0 nova_compute[189268]: 2025-11-22 09:05:07.847 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:09 compute-0 nova_compute[189268]: 2025-11-22 09:05:09.425 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:05:10.012 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:05:10.013 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:05:10.013 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:12 compute-0 nova_compute[189268]: 2025-11-22 09:05:12.848 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:14 compute-0 nova_compute[189268]: 2025-11-22 09:05:14.428 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:16 compute-0 podman[259074]: 2025-11-22 09:05:16.128378085 +0000 UTC m=+0.081116694 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_id=edpm, name=ubi9, release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=base rhel9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, maintainer=Red Hat, Inc., release=1214.1726694543, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler)
Nov 22 09:05:16 compute-0 podman[259082]: 2025-11-22 09:05:16.135712612 +0000 UTC m=+0.068714070 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 22 09:05:16 compute-0 podman[259075]: 2025-11-22 09:05:16.148727632 +0000 UTC m=+0.093617561 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:05:16 compute-0 podman[259076]: 2025-11-22 09:05:16.159559494 +0000 UTC m=+0.099287313 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a)
Nov 22 09:05:17 compute-0 nova_compute[189268]: 2025-11-22 09:05:17.852 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:18 compute-0 nova_compute[189268]: 2025-11-22 09:05:18.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:05:18 compute-0 nova_compute[189268]: 2025-11-22 09:05:18.123 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:18 compute-0 nova_compute[189268]: 2025-11-22 09:05:18.123 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:18 compute-0 nova_compute[189268]: 2025-11-22 09:05:18.124 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:18 compute-0 nova_compute[189268]: 2025-11-22 09:05:18.124 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:05:18 compute-0 nova_compute[189268]: 2025-11-22 09:05:18.208 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:18 compute-0 nova_compute[189268]: 2025-11-22 09:05:18.282 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:18 compute-0 nova_compute[189268]: 2025-11-22 09:05:18.283 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:18 compute-0 nova_compute[189268]: 2025-11-22 09:05:18.351 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:18 compute-0 nova_compute[189268]: 2025-11-22 09:05:18.359 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:18 compute-0 nova_compute[189268]: 2025-11-22 09:05:18.439 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:18 compute-0 nova_compute[189268]: 2025-11-22 09:05:18.440 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:05:18 compute-0 nova_compute[189268]: 2025-11-22 09:05:18.504 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:05:18 compute-0 nova_compute[189268]: 2025-11-22 09:05:18.928 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:05:18 compute-0 nova_compute[189268]: 2025-11-22 09:05:18.930 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4991MB free_disk=72.36603164672852GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:05:18 compute-0 nova_compute[189268]: 2025-11-22 09:05:18.930 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:05:18 compute-0 nova_compute[189268]: 2025-11-22 09:05:18.931 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:05:19 compute-0 nova_compute[189268]: 2025-11-22 09:05:19.020 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:05:19 compute-0 nova_compute[189268]: 2025-11-22 09:05:19.020 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4809ca0d-4075-4d68-8ee7-5275c4253891 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:05:19 compute-0 nova_compute[189268]: 2025-11-22 09:05:19.021 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:05:19 compute-0 nova_compute[189268]: 2025-11-22 09:05:19.021 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:05:19 compute-0 nova_compute[189268]: 2025-11-22 09:05:19.085 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:05:19 compute-0 nova_compute[189268]: 2025-11-22 09:05:19.099 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:05:19 compute-0 nova_compute[189268]: 2025-11-22 09:05:19.101 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:05:19 compute-0 nova_compute[189268]: 2025-11-22 09:05:19.101 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:05:19 compute-0 nova_compute[189268]: 2025-11-22 09:05:19.432 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:22 compute-0 nova_compute[189268]: 2025-11-22 09:05:22.857 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:24 compute-0 podman[259167]: 2025-11-22 09:05:24.116652215 +0000 UTC m=+0.072942394 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 22 09:05:24 compute-0 nova_compute[189268]: 2025-11-22 09:05:24.438 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:26 compute-0 podman[259188]: 2025-11-22 09:05:26.124716283 +0000 UTC m=+0.070001095 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
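
[annotation] node_exporter is published on host port 9100 (the 'ports' entry above) with most collectors disabled and a systemd unit whitelist, so its scrape surface is deliberately small. A sketch of scraping it the way Prometheus would; note the --web.config.file flag above may enforce TLS or client auth, in which case the plain-http URL used here would be refused:

    import urllib.request

    # Fetch the exporter's metrics page and print the first node_ metric.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as r:
        for line in r.read().decode().splitlines():
            if line.startswith("node_"):
                print(line)
                break
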
Nov 22 09:05:27 compute-0 nova_compute[189268]: 2025-11-22 09:05:27.860 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:29 compute-0 nova_compute[189268]: 2025-11-22 09:05:29.441 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:29 compute-0 podman[203476]: time="2025-11-22T09:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:05:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 09:05:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
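
[annotation] These two GETs are podman_exporter polling podman's libpod REST API over the unix socket (the CONTAINER_HOST visible in its config). http.client can speak that API directly by overriding connect(), a standard pattern sketched here; the socket path and API version are taken from this log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")  # host only fills the Host header
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")
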
Nov 22 09:05:31 compute-0 openstack_network_exporter[205661]: ERROR   09:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:05:31 compute-0 openstack_network_exporter[205661]: ERROR   09:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:05:31 compute-0 openstack_network_exporter[205661]: ERROR   09:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:05:31 compute-0 openstack_network_exporter[205661]: ERROR   09:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:05:31 compute-0 openstack_network_exporter[205661]: ERROR   09:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
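
[annotation] These exporter errors are a lookup failure, not a daemon crash: appctl reaches an OVS/OVN daemon through a control socket named <daemon>.<pid>.ctl in the daemon's run directory, and on a compute node ovn-northd (a control-plane daemon) has no such socket at all, while the dpif-netdev/* commands only apply to userspace (DPDK) datapaths. A sketch of the same discovery step; the run directories are the ones mounted into the exporter container above:

    import glob

    # appctl targets <rundir>/<daemon>.<pid>.ctl; an empty glob is what
    # produces "no control socket files found" in the log.
    for daemon in ("ovsdb-server", "ovs-vswitchd", "ovn-northd"):
        hits = (glob.glob(f"/run/openvswitch/{daemon}.*.ctl")
                + glob.glob(f"/run/ovn/{daemon}.*.ctl"))
        print(daemon, "->", hits or "no control socket files found")
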
Nov 22 09:05:32 compute-0 nova_compute[189268]: 2025-11-22 09:05:32.862 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:34 compute-0 podman[259213]: 2025-11-22 09:05:34.115719493 +0000 UTC m=+0.069517151 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 09:05:34 compute-0 podman[259214]: 2025-11-22 09:05:34.119005082 +0000 UTC m=+0.068376721 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:05:34 compute-0 podman[259212]: 2025-11-22 09:05:34.136838841 +0000 UTC m=+0.097171016 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 22 09:05:34 compute-0 nova_compute[189268]: 2025-11-22 09:05:34.444 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:37 compute-0 nova_compute[189268]: 2025-11-22 09:05:37.865 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:39 compute-0 nova_compute[189268]: 2025-11-22 09:05:39.447 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:42 compute-0 nova_compute[189268]: 2025-11-22 09:05:42.867 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:44 compute-0 nova_compute[189268]: 2025-11-22 09:05:44.451 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:47 compute-0 podman[259274]: 2025-11-22 09:05:47.120659755 +0000 UTC m=+0.067157198 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, name=ubi9, release-0.7.12=, io.openshift.expose-services=, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm)
Nov 22 09:05:47 compute-0 podman[259277]: 2025-11-22 09:05:47.129118212 +0000 UTC m=+0.065950295 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 09:05:47 compute-0 podman[259276]: 2025-11-22 09:05:47.160091326 +0000 UTC m=+0.100037394 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true)
Nov 22 09:05:47 compute-0 podman[259275]: 2025-11-22 09:05:47.200817572 +0000 UTC m=+0.144391247 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118)
Nov 22 09:05:47 compute-0 nova_compute[189268]: 2025-11-22 09:05:47.869 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:49 compute-0 nova_compute[189268]: 2025-11-22 09:05:49.454 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:51 compute-0 nova_compute[189268]: 2025-11-22 09:05:51.097 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:05:52 compute-0 nova_compute[189268]: 2025-11-22 09:05:52.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:05:52 compute-0 nova_compute[189268]: 2025-11-22 09:05:52.872 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:54 compute-0 nova_compute[189268]: 2025-11-22 09:05:54.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
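
[annotation] The "Running periodic task ComputeManager._*" lines come from oslo.service's periodic task machinery: methods tagged with a decorator are collected by the manager class and dispatched by run_periodic_tasks on a timer. A minimal sketch assuming oslo.service and oslo.config are installed; DemoManager and _poll_something are illustrative names, not nova's:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        # run_immediately=True makes the first dispatch run the task;
        # nova logs one "Running periodic task ..." line per dispatch.
        @periodic_task.periodic_task(spacing=10, run_immediately=True)
        def _poll_something(self, context):
            print("polled")

    mgr = DemoManager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)
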
Nov 22 09:05:54 compute-0 nova_compute[189268]: 2025-11-22 09:05:54.457 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:55 compute-0 podman[259356]: 2025-11-22 09:05:55.114206533 +0000 UTC m=+0.069013399 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_id=edpm, io.buildah.version=1.33.7, name=ubi9-minimal, distribution-scope=public, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 22 09:05:57 compute-0 podman[259378]: 2025-11-22 09:05:57.100478764 +0000 UTC m=+0.053991394 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 09:05:57 compute-0 nova_compute[189268]: 2025-11-22 09:05:57.876 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:58 compute-0 nova_compute[189268]: 2025-11-22 09:05:58.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:05:58 compute-0 nova_compute[189268]: 2025-11-22 09:05:58.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:05:59 compute-0 nova_compute[189268]: 2025-11-22 09:05:59.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:05:59 compute-0 nova_compute[189268]: 2025-11-22 09:05:59.460 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:05:59 compute-0 podman[203476]: time="2025-11-22T09:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:05:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 09:05:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Nov 22 09:06:01 compute-0 nova_compute[189268]: 2025-11-22 09:06:01.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:01 compute-0 nova_compute[189268]: 2025-11-22 09:06:01.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:06:01 compute-0 openstack_network_exporter[205661]: ERROR   09:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:06:01 compute-0 openstack_network_exporter[205661]: ERROR   09:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:06:01 compute-0 openstack_network_exporter[205661]: ERROR   09:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:06:01 compute-0 openstack_network_exporter[205661]: ERROR   09:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:06:01 compute-0 openstack_network_exporter[205661]: ERROR   09:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 09:06:01 compute-0 nova_compute[189268]: 2025-11-22 09:06:01.499 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-4809ca0d-4075-4d68-8ee7-5275c4253891" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:06:01 compute-0 nova_compute[189268]: 2025-11-22 09:06:01.500 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-4809ca0d-4075-4d68-8ee7-5275c4253891" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:06:01 compute-0 nova_compute[189268]: 2025-11-22 09:06:01.500 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:06:02 compute-0 nova_compute[189268]: 2025-11-22 09:06:02.876 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:02 compute-0 nova_compute[189268]: 2025-11-22 09:06:02.921 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Updating instance_info_cache with network_info: [{"id": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "address": "fa:16:3e:5e:e6:af", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.103", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec3e8b1-78", "ovs_interfaceid": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:06:02 compute-0 nova_compute[189268]: 2025-11-22 09:06:02.959 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-4809ca0d-4075-4d68-8ee7-5275c4253891" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:06:02 compute-0 nova_compute[189268]: 2025-11-22 09:06:02.960 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
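
[annotation] The network_info blob logged above is the per-VIF cache nova stores for the instance. A sketch of pulling the routable facts back out of it; the JSON literal is copied from the log line, trimmed to the keys the example touches:

    import json

    network_info = json.loads("""
    [{"id": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915",
      "address": "fa:16:3e:5e:e6:af",
      "devname": "tap9ec3e8b1-78",
      "type": "ovs",
      "network": {"bridge": "br-int",
                  "subnets": [{"cidr": "10.100.0.0/16",
                               "ips": [{"address": "10.100.3.103",
                                        "type": "fixed"}]}],
                  "meta": {"mtu": 1442, "tunneled": true}}}]
    """)

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        # -> tap9ec3e8b1-78 fa:16:3e:5e:e6:af ['10.100.3.103'] mtu 1442
        print(vif["devname"], vif["address"], ips,
              "mtu", vif["network"]["meta"]["mtu"])
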
Nov 22 09:06:04 compute-0 nova_compute[189268]: 2025-11-22 09:06:04.462 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:05 compute-0 podman[259403]: 2025-11-22 09:06:05.112027566 +0000 UTC m=+0.060237632 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 22 09:06:05 compute-0 podman[259402]: 2025-11-22 09:06:05.11329646 +0000 UTC m=+0.065268927 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:06:05 compute-0 podman[259404]: 2025-11-22 09:06:05.152616048 +0000 UTC m=+0.098030179 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 09:06:06 compute-0 nova_compute[189268]: 2025-11-22 09:06:06.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:07 compute-0 nova_compute[189268]: 2025-11-22 09:06:07.877 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:08 compute-0 nova_compute[189268]: 2025-11-22 09:06:08.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:09 compute-0 nova_compute[189268]: 2025-11-22 09:06:09.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:09 compute-0 nova_compute[189268]: 2025-11-22 09:06:09.465 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:06:10.013 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:06:10.014 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:06:10.014 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:12 compute-0 nova_compute[189268]: 2025-11-22 09:06:12.112 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:12 compute-0 nova_compute[189268]: 2025-11-22 09:06:12.112 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 09:06:12 compute-0 nova_compute[189268]: 2025-11-22 09:06:12.125 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 09:06:12 compute-0 nova_compute[189268]: 2025-11-22 09:06:12.880 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:14 compute-0 nova_compute[189268]: 2025-11-22 09:06:14.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:14 compute-0 nova_compute[189268]: 2025-11-22 09:06:14.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 09:06:14 compute-0 nova_compute[189268]: 2025-11-22 09:06:14.467 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:17 compute-0 nova_compute[189268]: 2025-11-22 09:06:17.883 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:18 compute-0 podman[259461]: 2025-11-22 09:06:18.12784665 +0000 UTC m=+0.080808675 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=base rhel9, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1214.1726694543, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc.)
Nov 22 09:06:18 compute-0 podman[259463]: 2025-11-22 09:06:18.132695201 +0000 UTC m=+0.073838868 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 22 09:06:18 compute-0 podman[259468]: 2025-11-22 09:06:18.158166947 +0000 UTC m=+0.097965008 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 09:06:18 compute-0 podman[259462]: 2025-11-22 09:06:18.198759509 +0000 UTC m=+0.142053423 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 09:06:19 compute-0 nova_compute[189268]: 2025-11-22 09:06:19.110 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:19 compute-0 nova_compute[189268]: 2025-11-22 09:06:19.138 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:19 compute-0 nova_compute[189268]: 2025-11-22 09:06:19.139 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:19 compute-0 nova_compute[189268]: 2025-11-22 09:06:19.139 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:19 compute-0 nova_compute[189268]: 2025-11-22 09:06:19.140 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:06:19 compute-0 nova_compute[189268]: 2025-11-22 09:06:19.220 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:19 compute-0 nova_compute[189268]: 2025-11-22 09:06:19.288 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:19 compute-0 nova_compute[189268]: 2025-11-22 09:06:19.289 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:19 compute-0 nova_compute[189268]: 2025-11-22 09:06:19.356 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:19 compute-0 nova_compute[189268]: 2025-11-22 09:06:19.363 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:19 compute-0 nova_compute[189268]: 2025-11-22 09:06:19.428 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:06:19 compute-0 nova_compute[189268]: 2025-11-22 09:06:19.430 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:06:19 compute-0 nova_compute[189268]: 2025-11-22 09:06:19.470 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:19 compute-0 nova_compute[189268]: 2025-11-22 09:06:19.497 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
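
[annotation] Each disk audit above is qemu-img info wrapped in oslo_concurrency.prlimit, which caps the child at 1 GiB of address space (--as) and 30 s of CPU (--cpu) so a pathological image cannot wedge the resource tracker. A sketch reproducing the same invocation; the command and instance path are taken from the log:

    import json
    import subprocess

    path = "/var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk"
    cmd = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
           "--as=1073741824", "--cpu=30", "--",
           "env", "LC_ALL=C", "LANG=C",
           "qemu-img", "info", path, "--force-share", "--output=json"]
    # --force-share lets us inspect a disk that a running VM holds open.
    info = json.loads(subprocess.run(cmd, capture_output=True,
                                     text=True, check=True).stdout)
    print(info["format"], info["virtual-size"])
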
Nov 22 09:06:19 compute-0 nova_compute[189268]: 2025-11-22 09:06:19.841 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:06:19 compute-0 nova_compute[189268]: 2025-11-22 09:06:19.842 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4977MB free_disk=72.36603164672852GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:06:19 compute-0 nova_compute[189268]: 2025-11-22 09:06:19.843 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:19 compute-0 nova_compute[189268]: 2025-11-22 09:06:19.843 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
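The acquire/release pair around "compute_resources" is oslo.concurrency's named-semaphore pattern: every resource-tracker mutation runs under the same lock name, which serializes this periodic update against concurrent instance claims. A minimal sketch of that pattern (the function body is a placeholder, not nova's implementation):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # Runs with the "compute_resources" semaphore held, producing the
        # acquired/released pairs seen in this log.
        pass

    update_available_resource()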
Nov 22 09:06:20 compute-0 nova_compute[189268]: 2025-11-22 09:06:20.031 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 is actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:06:20 compute-0 nova_compute[189268]: 2025-11-22 09:06:20.032 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4809ca0d-4075-4d68-8ee7-5275c4253891 is actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:06:20 compute-0 nova_compute[189268]: 2025-11-22 09:06:20.032 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:06:20 compute-0 nova_compute[189268]: 2025-11-22 09:06:20.032 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
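The final view is internally consistent with the rest of this cycle: used_ram folds in the host memory reservation plus the two tracked guests, and used_vcpus matches the two single-vCPU m1.nano instances. A quick check using only values reported in this log (the 512 MB reservation appears in the inventory lines just below):

    reserved_host_memory_mb = 512           # MEMORY_MB "reserved" in the inventory
    guest_ram_mb = [128, 128]               # the two m1.nano allocations above
    assert reserved_host_memory_mb + sum(guest_ram_mb) == 768   # used_ram=768MB

    total_vcpus, used_vcpus = 8, 2
    assert total_vcpus - used_vcpus == 6    # free_vcpus=6 in the hypervisor view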
Nov 22 09:06:20 compute-0 nova_compute[189268]: 2025-11-22 09:06:20.116 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing inventories for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 09:06:20 compute-0 nova_compute[189268]: 2025-11-22 09:06:20.197 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating ProviderTree inventory for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 09:06:20 compute-0 nova_compute[189268]: 2025-11-22 09:06:20.198 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating inventory in ProviderTree for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 09:06:20 compute-0 nova_compute[189268]: 2025-11-22 09:06:20.212 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing aggregate associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 09:06:20 compute-0 nova_compute[189268]: 2025-11-22 09:06:20.232 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing trait associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, traits: COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 09:06:20 compute-0 nova_compute[189268]: 2025-11-22 09:06:20.294 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:06:20 compute-0 nova_compute[189268]: 2025-11-22 09:06:20.309 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
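Placement turns each inventory record into schedulable capacity as (total - reserved) * allocation_ratio, which is why this host can oversubscribe CPU 4x while undersubscribing disk. Reproducing the arithmetic with the inventory from the line above:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2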
Nov 22 09:06:20 compute-0 nova_compute[189268]: 2025-11-22 09:06:20.311 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:06:20 compute-0 nova_compute[189268]: 2025-11-22 09:06:20.311 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.468s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:21 compute-0 sshd-session[259552]: Invalid user oracle from 80.94.92.164 port 44354
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.100 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.101 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
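The warning and the registration lines that follow describe a plain concurrent.futures setup: with one worker thread and many pollsters, submitted tasks queue and run serially. A stdlib sketch of that shape (poll() is a stand-in for a pollster's sample collection, not ceilometer's code):

    from concurrent.futures import ThreadPoolExecutor

    def poll(meter):
        # Placeholder for one pollster's discovery + sample collection.
        return meter

    meters = ["network.incoming.bytes", "network.outgoing.packets", "cpu"]
    with ThreadPoolExecutor(max_workers=1) as executor:   # "[1] threads" above
        futures = [executor.submit(poll, m) for m in meters]
        for f in futures:
            print(f.result())   # one at a time, in submission order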
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.101 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.105 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.105 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.105 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.105 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.105 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.105 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808c6d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.106 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5', 'name': 'te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn', 'flavor': {'id': '60cc47c3-347f-4964-bb52-9bef8d0548a9', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '0f738201-0a54-4f17-a455-df9aa7963f79'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6872b219a7f441adb7db6dc2b4e66fd7', 'user_id': '37215e9bc58040aeb55ccd7e534b2a8c', 'hostId': '44bfd8cb608e8e36740e229fabc76c7785419d24d05fef040bbf4521', 'status': 'active', 'metadata': {'metering.server_group': 'e65dbf71-31dd-495a-8544-26d84c5284b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.109 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4809ca0d-4075-4d68-8ee7-5275c4253891', 'name': 'te-1646439-asg-gba3vv6vgk7b-tmn4otq576rq-xk2uuzpcqq5p', 'flavor': {'id': '60cc47c3-347f-4964-bb52-9bef8d0548a9', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '0f738201-0a54-4f17-a455-df9aa7963f79'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6872b219a7f441adb7db6dc2b4e66fd7', 'user_id': '37215e9bc58040aeb55ccd7e534b2a8c', 'hostId': '44bfd8cb608e8e36740e229fabc76c7785419d24d05fef040bbf4521', 'status': 'active', 'metadata': {'metering.server_group': 'e65dbf71-31dd-495a-8544-26d84c5284b3'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
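Each discovery payload is a plain dict, and the pollsters key their samples on a handful of its fields: id becomes the sample's resource_id, OS-EXT-SRV-ATTR:instance_name names the libvirt domain to query, and metering.* metadata is carried through for billing. A sketch pulling those out (values copied from the second payload above):

    instance = {
        "id": "4809ca0d-4075-4d68-8ee7-5275c4253891",
        "flavor": {"name": "m1.nano", "vcpus": 1, "ram": 128, "disk": 1},
        "OS-EXT-SRV-ATTR:instance_name": "instance-00000010",
        "metadata": {"metering.server_group": "e65dbf71-31dd-495a-8544-26d84c5284b3"},
    }
    resource_id = instance["id"]
    domain_name = instance["OS-EXT-SRV-ATTR:instance_name"]
    server_group = instance["metadata"]["metering.server_group"]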
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.109 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.109 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.109 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.110 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.111 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-22T09:06:22.110021) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.113 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.117 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.117 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
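The volumes just logged (2150 and 1976 bytes) are cumulative per-vNIC counters read from libvirt. A minimal sketch of the underlying call; the URI and domain name come from this log, while the device name is hypothetical (the real one is taken from the domain XML):

    import libvirt  # python3-libvirt bindings

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000010")
    # interfaceStats() returns (rx_bytes, rx_packets, rx_errs, rx_drop,
    #                           tx_bytes, tx_packets, tx_errs, tx_drop)
    stats = dom.interfaceStats("vnet0")   # hypothetical vNIC device name
    print("rx_bytes:", stats[0])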
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.117 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.117 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.117 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.117 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.117 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.118 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.118 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-22T09:06:22.117864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.118 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.118 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.118 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.118 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.118 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.119 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.119 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.119 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.119 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.119 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.119 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.120 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.120 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.120 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.120 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.120 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-22T09:06:22.119101) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.120 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.120 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.121 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.121 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.121 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.121 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.121 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.121 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.122 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-22T09:06:22.120405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.122 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-22T09:06:22.121674) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.143 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/cpu volume: 337290000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 sshd-session[259552]: Connection closed by invalid user oracle 80.94.92.164 port 44354 [preauth]
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.162 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/cpu volume: 336750000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.163 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
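The cpu volumes (337290000000 and 336750000000) are cumulative guest CPU time in nanoseconds, not percentages; a downstream consumer derives utilization from two successive samples. A sketch of that conversion, with a hypothetical follow-up sample 300 s later:

    t0_s, cpu0_ns = 0.0, 336_750_000_000     # this poll (value from the log)
    t1_s, cpu1_ns = 300.0, 339_750_000_000   # hypothetical next poll
    vcpus = 1                                 # m1.nano flavor above

    util = (cpu1_ns - cpu0_ns) / ((t1_s - t0_s) * 1e9 * vcpus)
    print(f"cpu_util = {util:.1%}")           # 1.0%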
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.163 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.163 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.163 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.163 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.163 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.164 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.164 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-22T09:06:22.163770) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.164 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.164 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.165 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.165 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.165 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.165 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.165 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.165 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-22T09:06:22.165404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.178 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.178 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.190 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.190 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.191 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
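Two capacity samples per instance means two block devices: the 1 GiB root disk (1073741824 bytes) and a small secondary device (509952 bytes). Per-device figures like these come from libvirt's blockInfo(); a sketch with a hypothetical device name:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000010")
    # blockInfo() returns [capacity, allocation, physical] in bytes.
    capacity, allocation, physical = dom.blockInfo("vda")   # hypothetical name
    print(capacity)   # 1073741824 for the root disk above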
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.191 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.191 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.191 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.191 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.192 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-22T09:06:22.191956) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.228 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.bytes volume: 31074816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.229 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.264 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.bytes volume: 31488512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.264 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.264 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
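disk.device.read.bytes likewise maps onto a per-device libvirt call, blockStats(), which returns cumulative request and byte counters. A sketch, with the same hypothetical device name as above:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000010")
    # blockStats() returns (rd_req, rd_bytes, wr_req, wr_bytes, errs).
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats("vda")
    print("rd_bytes:", rd_bytes)   # cf. "disk.device.read.bytes volume: 31488512"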
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.265 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.265 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.265 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.265 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.265 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.265 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.266 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.266 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.266 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.266 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-22T09:06:22.265505) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.266 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.266 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.267 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.267 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.267 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.267 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.267 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.267 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.267 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-22T09:06:22.267215) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.267 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.268 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.268 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.268 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.268 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.268 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.268 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.268 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.269 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.latency volume: 1863604470 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.269 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-22T09:06:22.268841) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.269 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.latency volume: 205964976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.269 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.latency volume: 2920726179 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.269 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.latency volume: 283496841 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.270 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.270 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.270 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.270 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.270 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.270 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.270 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.270 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.271 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.271 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.271 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.271 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.271 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.271 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.271 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.requests volume: 355 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.272 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-22T09:06:22.270539) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.272 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-22T09:06:22.271747) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.272 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.272 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.requests volume: 304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.272 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.272 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.273 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.273 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.273 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.273 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.273 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.273 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.requests volume: 1137 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.273 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.273 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.274 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.274 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.274 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.274 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-22T09:06:22.273384) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.274 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.274 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.275 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.275 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.275 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.275 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.275 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-22T09:06:22.275095) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.276 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.276 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.276 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.276 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.276 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.276 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.276 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.276 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.277 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-22T09:06:22.276767) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.277 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.277 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.bytes volume: 73154560 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.277 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.278 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.278 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.278 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.278 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.278 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.278 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.278 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.278 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.279 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.279 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.279 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.279 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-22T09:06:22.278429) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.279 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.279 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.279 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.279 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.latency volume: 65236545384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.279 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-22T09:06:22.279690) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.280 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.280 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.latency volume: 91026079656 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.280 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.280 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.281 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.281 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.281 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.281 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.281 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.281 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-22T09:06:22.281299) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.281 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.281 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.282 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.282 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.282 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.282 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.282 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.282 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.282 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-22T09:06:22.282254) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.283 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.283 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.283 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.283 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.283 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.283 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.283 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-22T09:06:22.283508) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.284 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.284 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.284 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.284 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.284 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.284 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.284 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.284 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.285 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-22T09:06:22.284424) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.285 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.285 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.285 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.285 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.285 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.285 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.285 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.285 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-22T09:06:22.285606) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.286 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.286 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.286 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.286 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.286 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.286 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.286 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.286 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.287 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.287 15 DEBUG ceilometer.compute.pollsters [-] 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/memory.usage volume: 42.328125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.287 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-22T09:06:22.286978) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.287 15 DEBUG ceilometer.compute.pollsters [-] 4809ca0d-4075-4d68-8ee7-5275c4253891/memory.usage volume: 42.421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.287 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:06:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:06:22.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
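Every ceilometer block above follows the same shape: discovery (manager.py:294), an optional skip when discovery returns nothing new (manager.py:321), a coordination check against the hashrings (manager.py:333 and :355), a heartbeat update (manager.py:636, mirrored by the "Updated heartbeat" records from worker 12 at manager.py:502), one _stats_to_sample line per instance and device, and a final "Finished polling" record. The following minimal Python sketch reconstructs that flow for readability; apart from the logged strings and line references, the manager/pollster objects and method signatures are assumptions, not the actual ceilometer implementation:

    import logging

    logging.basicConfig(level=logging.DEBUG)
    log = logging.getLogger("ceilometer.polling.manager")

    def run_pollster(manager, pollster, source="pollsters"):
        """Hypothetical stand-in for _internal_pollster_run; not the real code."""
        # Discovery (cf. manager.py:294): resolve [local_instances] to resources.
        resources = manager.discover(["local_instances"])
        if not resources:
            # cf. manager.py:321
            log.debug("Skip pollster %s, no new resources found this cycle",
                      pollster.name)
            return
        log.info("Polling pollster %s in the context of %s", pollster.name, source)
        # Coordination check (cf. manager.py:333/355): none of these pollsters
        # belong to a source with a hashring, so they poll locally.
        log.debug("The pollster [%s] is not configured in a source for polling "
                  "that requires coordination.", pollster)
        manager.heartbeat(pollster.name)  # cf. manager.py:636
        # Each sample appears above as "<instance uuid>/<meter> volume: <value>"
        # (one line per instance, and per device for disk.device.* meters).
        for sample in pollster.obj.get_samples(manager, {}, resources):
            log.debug("%s/%s volume: %s",
                      sample.resource_id, sample.name, sample.volume)
        log.info("Finished polling pollster %s", pollster.name)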
Nov 22 09:06:22 compute-0 nova_compute[189268]: 2025-11-22 09:06:22.885 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:23 compute-0 nova_compute[189268]: 2025-11-22 09:06:23.294 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:24 compute-0 nova_compute[189268]: 2025-11-22 09:06:24.474 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:26 compute-0 podman[259555]: 2025-11-22 09:06:26.115927887 +0000 UTC m=+0.076028537 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vcs-type=git, version=9.6, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 22 09:06:27 compute-0 nova_compute[189268]: 2025-11-22 09:06:27.886 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:28 compute-0 podman[259576]: 2025-11-22 09:06:28.100051159 +0000 UTC m=+0.059702278 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 22 09:06:29 compute-0 nova_compute[189268]: 2025-11-22 09:06:29.478 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:29 compute-0 podman[203476]: time="2025-11-22T09:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:06:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 09:06:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4816 "" "Go-http-client/1.1"
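The two GET records above are the podman API service answering a client on its unix socket; the "Go-http-client/1.1" agent and the containers/json and containers/stats queries suggest the prometheus-podman-exporter configured later in this log (CONTAINER_HOST=unix:///run/podman/podman.sock). A hedged sketch of issuing the same first query from Python; only the socket path and the URL come from this log, the wrapper class is illustrative:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Speak HTTP over a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    # Socket path as mounted for podman_exporter further down in this log.
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")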
Nov 22 09:06:31 compute-0 openstack_network_exporter[205661]: ERROR   09:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:06:31 compute-0 openstack_network_exporter[205661]: ERROR   09:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:06:31 compute-0 openstack_network_exporter[205661]: ERROR   09:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:06:31 compute-0 openstack_network_exporter[205661]: ERROR   09:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:06:31 compute-0 openstack_network_exporter[205661]: ERROR   09:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
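The exporter errors above all reduce to one condition: no *.ctl control socket was found for the daemon being queried. On a compute node this is expected for ovn-northd and the OVN database server, which typically run on the controllers; the dpif-netdev calls likely fail separately because ovs-vswitchd has no userspace (netdev) datapath on a kernel-datapath host. A small illustrative check, with socket paths assumed from the openstack_network_exporter volume mounts ('/var/lib/openvswitch/ovn:/run/ovn', '/var/run/openvswitch:/run/openvswitch'):

    import glob

    # Hypothetical reproduction of the "no control socket files found" probe.
    for pattern in ("/run/ovn/ovn-northd.*.ctl",
                    "/run/openvswitch/ovsdb-server.*.ctl"):
        found = glob.glob(pattern)
        print(pattern, "->", found if found else "no control socket files found")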
Nov 22 09:06:32 compute-0 nova_compute[189268]: 2025-11-22 09:06:32.889 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:34 compute-0 nova_compute[189268]: 2025-11-22 09:06:34.481 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:36 compute-0 podman[259601]: 2025-11-22 09:06:36.13722314 +0000 UTC m=+0.085051320 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 22 09:06:36 compute-0 podman[259600]: 2025-11-22 09:06:36.141888255 +0000 UTC m=+0.096869767 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 09:06:36 compute-0 podman[259599]: 2025-11-22 09:06:36.150330823 +0000 UTC m=+0.109267212 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 09:06:37 compute-0 nova_compute[189268]: 2025-11-22 09:06:37.891 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:39 compute-0 nova_compute[189268]: 2025-11-22 09:06:39.485 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:42 compute-0 nova_compute[189268]: 2025-11-22 09:06:42.893 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:44 compute-0 nova_compute[189268]: 2025-11-22 09:06:44.489 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:47 compute-0 nova_compute[189268]: 2025-11-22 09:06:47.894 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:49 compute-0 podman[259656]: 2025-11-22 09:06:49.117969016 +0000 UTC m=+0.076403498 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, distribution-scope=public, io.openshift.tags=base rhel9, name=ubi9)
Nov 22 09:06:49 compute-0 podman[259657]: 2025-11-22 09:06:49.138660883 +0000 UTC m=+0.094579327 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:06:49 compute-0 podman[259658]: 2025-11-22 09:06:49.14377235 +0000 UTC m=+0.094057492 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a)
Nov 22 09:06:49 compute-0 podman[259659]: 2025-11-22 09:06:49.14379545 +0000 UTC m=+0.089068197 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:06:49 compute-0 nova_compute[189268]: 2025-11-22 09:06:49.492 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:50 compute-0 nova_compute[189268]: 2025-11-22 09:06:50.112 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:51 compute-0 nova_compute[189268]: 2025-11-22 09:06:51.351 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:51 compute-0 nova_compute[189268]: 2025-11-22 09:06:51.383 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Triggering sync for uuid 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 09:06:51 compute-0 nova_compute[189268]: 2025-11-22 09:06:51.384 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Triggering sync for uuid 4809ca0d-4075-4d68-8ee7-5275c4253891 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 09:06:51 compute-0 nova_compute[189268]: 2025-11-22 09:06:51.385 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:51 compute-0 nova_compute[189268]: 2025-11-22 09:06:51.386 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:51 compute-0 nova_compute[189268]: 2025-11-22 09:06:51.387 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "4809ca0d-4075-4d68-8ee7-5275c4253891" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:06:51 compute-0 nova_compute[189268]: 2025-11-22 09:06:51.388 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "4809ca0d-4075-4d68-8ee7-5275c4253891" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:06:51 compute-0 nova_compute[189268]: 2025-11-22 09:06:51.417 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:51 compute-0 nova_compute[189268]: 2025-11-22 09:06:51.422 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "4809ca0d-4075-4d68-8ee7-5275c4253891" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:06:52 compute-0 nova_compute[189268]: 2025-11-22 09:06:52.136 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:52 compute-0 nova_compute[189268]: 2025-11-22 09:06:52.898 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:54 compute-0 nova_compute[189268]: 2025-11-22 09:06:54.496 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:56 compute-0 nova_compute[189268]: 2025-11-22 09:06:56.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:06:57 compute-0 podman[259739]: 2025-11-22 09:06:57.131073462 +0000 UTC m=+0.074171827 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_id=edpm, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 22 09:06:57 compute-0 nova_compute[189268]: 2025-11-22 09:06:57.901 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:59 compute-0 podman[259760]: 2025-11-22 09:06:59.136693005 +0000 UTC m=+0.087672580 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 09:06:59 compute-0 nova_compute[189268]: 2025-11-22 09:06:59.500 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:06:59 compute-0 podman[203476]: time="2025-11-22T09:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:06:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 09:06:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
Nov 22 09:07:00 compute-0 nova_compute[189268]: 2025-11-22 09:07:00.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:07:00 compute-0 nova_compute[189268]: 2025-11-22 09:07:00.098 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:07:01 compute-0 nova_compute[189268]: 2025-11-22 09:07:01.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:07:01 compute-0 openstack_network_exporter[205661]: ERROR   09:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:07:01 compute-0 openstack_network_exporter[205661]: ERROR   09:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:07:01 compute-0 openstack_network_exporter[205661]: ERROR   09:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:07:01 compute-0 openstack_network_exporter[205661]: ERROR   09:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 09:07:01 compute-0 openstack_network_exporter[205661]: ERROR   09:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:07:02 compute-0 nova_compute[189268]: 2025-11-22 09:07:02.903 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:03 compute-0 nova_compute[189268]: 2025-11-22 09:07:03.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:07:03 compute-0 nova_compute[189268]: 2025-11-22 09:07:03.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:07:03 compute-0 nova_compute[189268]: 2025-11-22 09:07:03.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:07:03 compute-0 nova_compute[189268]: 2025-11-22 09:07:03.519 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 09:07:03 compute-0 nova_compute[189268]: 2025-11-22 09:07:03.519 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquired lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 09:07:03 compute-0 nova_compute[189268]: 2025-11-22 09:07:03.520 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 09:07:03 compute-0 nova_compute[189268]: 2025-11-22 09:07:03.520 189273 DEBUG nova.objects.instance [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:07:04 compute-0 nova_compute[189268]: 2025-11-22 09:07:04.505 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:05 compute-0 nova_compute[189268]: 2025-11-22 09:07:05.272 189273 DEBUG nova.network.neutron [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updating instance_info_cache with network_info: [{"id": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "address": "fa:16:3e:84:a4:4f", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped7b62da-e4", "ovs_interfaceid": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:07:05 compute-0 nova_compute[189268]: 2025-11-22 09:07:05.286 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Releasing lock "refresh_cache-4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 09:07:05 compute-0 nova_compute[189268]: 2025-11-22 09:07:05.286 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 09:07:07 compute-0 nova_compute[189268]: 2025-11-22 09:07:07.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:07:07 compute-0 podman[259786]: 2025-11-22 09:07:07.143916731 +0000 UTC m=+0.076484859 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:07:07 compute-0 podman[259784]: 2025-11-22 09:07:07.155802091 +0000 UTC m=+0.094030281 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:07:07 compute-0 podman[259785]: 2025-11-22 09:07:07.185968382 +0000 UTC m=+0.113095454 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 09:07:07 compute-0 nova_compute[189268]: 2025-11-22 09:07:07.905 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:08 compute-0 nova_compute[189268]: 2025-11-22 09:07:08.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:07:09 compute-0 nova_compute[189268]: 2025-11-22 09:07:09.508 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:07:10.015 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:07:10.016 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:07:10.016 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:12 compute-0 nova_compute[189268]: 2025-11-22 09:07:12.907 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:14 compute-0 nova_compute[189268]: 2025-11-22 09:07:14.511 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:17 compute-0 nova_compute[189268]: 2025-11-22 09:07:17.909 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:19 compute-0 nova_compute[189268]: 2025-11-22 09:07:19.515 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:20 compute-0 podman[259848]: 2025-11-22 09:07:20.13786968 +0000 UTC m=+0.079660915 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:07:20 compute-0 podman[259845]: 2025-11-22 09:07:20.154512928 +0000 UTC m=+0.107673849 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, container_name=kepler, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, name=ubi9)
Nov 22 09:07:20 compute-0 podman[259847]: 2025-11-22 09:07:20.155728599 +0000 UTC m=+0.100060903 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 22 09:07:20 compute-0 podman[259846]: 2025-11-22 09:07:20.192604262 +0000 UTC m=+0.142348382 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.121 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.121 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.122 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.122 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.198 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.260 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.261 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.324 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.330 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.393 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.394 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.457 189273 DEBUG oslo_concurrency.processutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.818 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.819 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4909MB free_disk=72.36600875854492GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.820 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.820 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.904 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.905 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Instance 4809ca0d-4075-4d68-8ee7-5275c4253891 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.905 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.905 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.978 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.995 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.996 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:07:21 compute-0 nova_compute[189268]: 2025-11-22 09:07:21.997 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:22 compute-0 nova_compute[189268]: 2025-11-22 09:07:22.912 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:24 compute-0 nova_compute[189268]: 2025-11-22 09:07:24.517 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:27 compute-0 nova_compute[189268]: 2025-11-22 09:07:27.915 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:28 compute-0 podman[259940]: 2025-11-22 09:07:28.10019017 +0000 UTC m=+0.061539958 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container)
Nov 22 09:07:29 compute-0 nova_compute[189268]: 2025-11-22 09:07:29.521 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:29 compute-0 podman[203476]: time="2025-11-22T09:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:07:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 09:07:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4814 "" "Go-http-client/1.1"
Nov 22 09:07:30 compute-0 podman[259961]: 2025-11-22 09:07:30.147830104 +0000 UTC m=+0.095669235 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 09:07:31 compute-0 openstack_network_exporter[205661]: ERROR   09:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:07:31 compute-0 openstack_network_exporter[205661]: ERROR   09:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:07:31 compute-0 openstack_network_exporter[205661]: ERROR   09:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:07:31 compute-0 openstack_network_exporter[205661]: ERROR   09:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 09:07:31 compute-0 openstack_network_exporter[205661]: ERROR   09:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:07:32 compute-0 nova_compute[189268]: 2025-11-22 09:07:32.917 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:34 compute-0 nova_compute[189268]: 2025-11-22 09:07:34.525 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:37 compute-0 nova_compute[189268]: 2025-11-22 09:07:37.920 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:38 compute-0 podman[259985]: 2025-11-22 09:07:38.119135219 +0000 UTC m=+0.061104365 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 22 09:07:38 compute-0 podman[259984]: 2025-11-22 09:07:38.153157815 +0000 UTC m=+0.100310190 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 22 09:07:38 compute-0 podman[259986]: 2025-11-22 09:07:38.164625694 +0000 UTC m=+0.102764077 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 09:07:39 compute-0 nova_compute[189268]: 2025-11-22 09:07:39.528 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:42 compute-0 nova_compute[189268]: 2025-11-22 09:07:42.923 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:44 compute-0 nova_compute[189268]: 2025-11-22 09:07:44.531 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:47 compute-0 nova_compute[189268]: 2025-11-22 09:07:47.925 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:49 compute-0 nova_compute[189268]: 2025-11-22 09:07:49.534 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:51 compute-0 podman[260044]: 2025-11-22 09:07:51.147521979 +0000 UTC m=+0.090111025 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, version=9.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, name=ubi9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 22 09:07:51 compute-0 podman[260046]: 2025-11-22 09:07:51.156995744 +0000 UTC m=+0.087255098 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 09:07:51 compute-0 podman[260047]: 2025-11-22 09:07:51.167333212 +0000 UTC m=+0.098239424 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:07:51 compute-0 podman[260045]: 2025-11-22 09:07:51.189567451 +0000 UTC m=+0.129580518 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Nov 22 09:07:52 compute-0 nova_compute[189268]: 2025-11-22 09:07:52.928 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:52 compute-0 nova_compute[189268]: 2025-11-22 09:07:52.992 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.175 189273 DEBUG oslo_concurrency.lockutils [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.176 189273 DEBUG oslo_concurrency.lockutils [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.177 189273 DEBUG oslo_concurrency.lockutils [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.177 189273 DEBUG oslo_concurrency.lockutils [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.177 189273 DEBUG oslo_concurrency.lockutils [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.179 189273 INFO nova.compute.manager [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Terminating instance
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.180 189273 DEBUG nova.compute.manager [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:07:54 compute-0 kernel: taped7b62da-e4 (unregistering): left promiscuous mode
Nov 22 09:07:54 compute-0 NetworkManager[56326]: <info>  [1763802474.2105] device (taped7b62da-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:07:54 compute-0 ovn_controller[97783]: 2025-11-22T09:07:54Z|00176|binding|INFO|Releasing lport ed7b62da-e420-4250-acdc-71cedcdde8ed from this chassis (sb_readonly=0)
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.221 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:54 compute-0 ovn_controller[97783]: 2025-11-22T09:07:54Z|00177|binding|INFO|Setting lport ed7b62da-e420-4250-acdc-71cedcdde8ed down in Southbound
Nov 22 09:07:54 compute-0 ovn_controller[97783]: 2025-11-22T09:07:54Z|00178|binding|INFO|Removing iface taped7b62da-e4 ovn-installed in OVS
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.226 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.237 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:07:54.238 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:a4:4f 10.100.3.45'], port_security=['fa:16:3e:84:a4:4f 10.100.3.45'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.45/16', 'neutron:device_id': '4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8ee541ea-f059-4138-b6cf-87ec84c3e9f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6872b219a7f441adb7db6dc2b4e66fd7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c782ed20-231b-4e59-ad25-952e10372407', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5efbe77c-7f0b-4c5a-a729-30b470e68fec, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=ed7b62da-e420-4250-acdc-71cedcdde8ed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:07:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:07:54.239 106642 INFO neutron.agent.ovn.metadata.agent [-] Port ed7b62da-e420-4250-acdc-71cedcdde8ed in datapath 8ee541ea-f059-4138-b6cf-87ec84c3e9f8 unbound from our chassis
Nov 22 09:07:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:07:54.240 106642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8ee541ea-f059-4138-b6cf-87ec84c3e9f8
Nov 22 09:07:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:07:54.257 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[62d5940a-acc9-4ee0-b55b-9cdf262f4310]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:54 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Nov 22 09:07:54 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 7min 10.684s CPU time.
Nov 22 09:07:54 compute-0 systemd-machined[155703]: Machine qemu-16-instance-0000000f terminated.
Nov 22 09:07:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:07:54.284 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[4dd194ea-6f56-4342-8def-0a0730c9b22f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:07:54.288 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[abf5f7db-75a6-4cfd-a285-1f17fc1e5cc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:07:54.316 239736 DEBUG oslo.privsep.daemon [-] privsep: reply[79049af4-ba27-4f8a-9301-ebc5511ceea3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:07:54.333 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0f1138-dd49-42dc-8526-0c847fa9ce47]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8ee541ea-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:36:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672040, 'reachable_time': 31260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260138, 'error': None, 'target': 'ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:07:54.349 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[04edf140-caf3-4de9-bbb1-d23823f17912]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap8ee541ea-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672051, 'tstamp': 672051}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260139, 'error': None, 'target': 'ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8ee541ea-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672053, 'tstamp': 672053}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260139, 'error': None, 'target': 'ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:07:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:07:54.351 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ee541ea-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.353 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.359 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:07:54.360 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ee541ea-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:07:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:07:54.360 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:07:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:07:54.360 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8ee541ea-f0, col_values=(('external_ids', {'iface-id': 'cddd47d2-111c-4ed1-83df-9f3b0e628d26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:07:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:07:54.360 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.403 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.410 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.444 189273 INFO nova.virt.libvirt.driver [-] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Instance destroyed successfully.
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.445 189273 DEBUG nova.objects.instance [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lazy-loading 'resources' on Instance uuid 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.462 189273 DEBUG nova.virt.libvirt.vif [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T08:53:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-1646439-asg-gba3vv6vgk7b-pyfpxkab6lyv-kmygrtfd6yvn',id=15,image_ref='0f738201-0a54-4f17-a455-df9aa7963f79',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T08:53:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='e65dbf71-31dd-495a-8544-26d84c5284b3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6872b219a7f441adb7db6dc2b4e66fd7',ramdisk_id='',reservation_id='r-eyix9rv8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0f738201-0a54-4f17-a455-df9aa7963f79',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1457752866',owner_user_name='tempest-PrometheusGabbiTest-1457752866-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T08:53:27Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='37215e9bc58040aeb55ccd7e534b2a8c',uuid=4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "address": "fa:16:3e:84:a4:4f", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped7b62da-e4", "ovs_interfaceid": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.462 189273 DEBUG nova.network.os_vif_util [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Converting VIF {"id": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "address": "fa:16:3e:84:a4:4f", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.45", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped7b62da-e4", "ovs_interfaceid": "ed7b62da-e420-4250-acdc-71cedcdde8ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.463 189273 DEBUG nova.network.os_vif_util [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:a4:4f,bridge_name='br-int',has_traffic_filtering=True,id=ed7b62da-e420-4250-acdc-71cedcdde8ed,network=Network(8ee541ea-f059-4138-b6cf-87ec84c3e9f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped7b62da-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.463 189273 DEBUG os_vif [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:a4:4f,bridge_name='br-int',has_traffic_filtering=True,id=ed7b62da-e420-4250-acdc-71cedcdde8ed,network=Network(8ee541ea-f059-4138-b6cf-87ec84c3e9f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped7b62da-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.465 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.466 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped7b62da-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.467 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.469 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.470 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.473 189273 INFO os_vif [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:a4:4f,bridge_name='br-int',has_traffic_filtering=True,id=ed7b62da-e420-4250-acdc-71cedcdde8ed,network=Network(8ee541ea-f059-4138-b6cf-87ec84c3e9f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped7b62da-e4')
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.474 189273 INFO nova.virt.libvirt.driver [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Deleting instance files /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5_del
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.474 189273 INFO nova.virt.libvirt.driver [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Deletion of /var/lib/nova/instances/4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5_del complete
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.482 189273 DEBUG nova.compute.manager [req-635e0343-c864-4f33-b68d-fa3cfe9e0c0e req-e641de30-a276-426b-8ef3-b018b358df23 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Received event network-vif-unplugged-ed7b62da-e420-4250-acdc-71cedcdde8ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.483 189273 DEBUG oslo_concurrency.lockutils [req-635e0343-c864-4f33-b68d-fa3cfe9e0c0e req-e641de30-a276-426b-8ef3-b018b358df23 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.483 189273 DEBUG oslo_concurrency.lockutils [req-635e0343-c864-4f33-b68d-fa3cfe9e0c0e req-e641de30-a276-426b-8ef3-b018b358df23 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.484 189273 DEBUG oslo_concurrency.lockutils [req-635e0343-c864-4f33-b68d-fa3cfe9e0c0e req-e641de30-a276-426b-8ef3-b018b358df23 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.484 189273 DEBUG nova.compute.manager [req-635e0343-c864-4f33-b68d-fa3cfe9e0c0e req-e641de30-a276-426b-8ef3-b018b358df23 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] No waiting events found dispatching network-vif-unplugged-ed7b62da-e420-4250-acdc-71cedcdde8ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.485 189273 DEBUG nova.compute.manager [req-635e0343-c864-4f33-b68d-fa3cfe9e0c0e req-e641de30-a276-426b-8ef3-b018b358df23 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Received event network-vif-unplugged-ed7b62da-e420-4250-acdc-71cedcdde8ed for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.529 189273 INFO nova.compute.manager [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Took 0.35 seconds to destroy the instance on the hypervisor.
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.530 189273 DEBUG oslo.service.loopingcall [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.530 189273 DEBUG nova.compute.manager [-] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.531 189273 DEBUG nova.network.neutron [-] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:07:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:07:54.717 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:cf:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'd6:f7:8f:a1:cd:35'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:07:54 compute-0 nova_compute[189268]: 2025-11-22 09:07:54.718 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:07:54.718 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:07:54 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:07:54.719 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:07:55 compute-0 nova_compute[189268]: 2025-11-22 09:07:55.766 189273 DEBUG nova.network.neutron [-] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:07:55 compute-0 nova_compute[189268]: 2025-11-22 09:07:55.851 189273 INFO nova.compute.manager [-] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Took 1.32 seconds to deallocate network for instance.
Nov 22 09:07:55 compute-0 nova_compute[189268]: 2025-11-22 09:07:55.926 189273 DEBUG oslo_concurrency.lockutils [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:55 compute-0 nova_compute[189268]: 2025-11-22 09:07:55.927 189273 DEBUG oslo_concurrency.lockutils [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:55 compute-0 nova_compute[189268]: 2025-11-22 09:07:55.954 189273 DEBUG nova.compute.manager [req-2bc0c68b-8b68-4608-acb9-f440266c237f req-a069c3ca-7651-45df-99e0-c45a036921d2 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Received event network-vif-deleted-ed7b62da-e420-4250-acdc-71cedcdde8ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:07:56 compute-0 nova_compute[189268]: 2025-11-22 09:07:56.007 189273 DEBUG nova.compute.provider_tree [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:07:56 compute-0 nova_compute[189268]: 2025-11-22 09:07:56.020 189273 DEBUG nova.scheduler.client.report [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
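
Editor's note: Placement derives schedulable capacity from this inventory as (total - reserved) × allocation_ratio, so the provider above offers 32 VCPU, 7167 MB of RAM, and 70.2 GB of disk. Worked out:

    # Effective capacity implied by the inventory data above.
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 79, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
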
Nov 22 09:07:56 compute-0 nova_compute[189268]: 2025-11-22 09:07:56.043 189273 DEBUG oslo_concurrency.lockutils [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:56 compute-0 nova_compute[189268]: 2025-11-22 09:07:56.087 189273 INFO nova.scheduler.client.report [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Deleted allocations for instance 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5
Nov 22 09:07:56 compute-0 nova_compute[189268]: 2025-11-22 09:07:56.158 189273 DEBUG oslo_concurrency.lockutils [None req-f0b06a09-5d37-45f7-90fa-07b1e6f1b24b 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:56 compute-0 nova_compute[189268]: 2025-11-22 09:07:56.584 189273 DEBUG nova.compute.manager [req-28bc4dea-9d7f-461e-b7c6-fde47fdcd32a req-8a528000-8c5b-4ab4-84e3-0105f3eb38b6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Received event network-vif-plugged-ed7b62da-e420-4250-acdc-71cedcdde8ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:07:56 compute-0 nova_compute[189268]: 2025-11-22 09:07:56.584 189273 DEBUG oslo_concurrency.lockutils [req-28bc4dea-9d7f-461e-b7c6-fde47fdcd32a req-8a528000-8c5b-4ab4-84e3-0105f3eb38b6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:07:56 compute-0 nova_compute[189268]: 2025-11-22 09:07:56.584 189273 DEBUG oslo_concurrency.lockutils [req-28bc4dea-9d7f-461e-b7c6-fde47fdcd32a req-8a528000-8c5b-4ab4-84e3-0105f3eb38b6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:07:56 compute-0 nova_compute[189268]: 2025-11-22 09:07:56.585 189273 DEBUG oslo_concurrency.lockutils [req-28bc4dea-9d7f-461e-b7c6-fde47fdcd32a req-8a528000-8c5b-4ab4-84e3-0105f3eb38b6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:07:56 compute-0 nova_compute[189268]: 2025-11-22 09:07:56.585 189273 DEBUG nova.compute.manager [req-28bc4dea-9d7f-461e-b7c6-fde47fdcd32a req-8a528000-8c5b-4ab4-84e3-0105f3eb38b6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] No waiting events found dispatching network-vif-plugged-ed7b62da-e420-4250-acdc-71cedcdde8ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:07:56 compute-0 nova_compute[189268]: 2025-11-22 09:07:56.585 189273 WARNING nova.compute.manager [req-28bc4dea-9d7f-461e-b7c6-fde47fdcd32a req-8a528000-8c5b-4ab4-84e3-0105f3eb38b6 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Received unexpected event network-vif-plugged-ed7b62da-e420-4250-acdc-71cedcdde8ed for instance with vm_state deleted and task_state None.
Nov 22 09:07:57 compute-0 nova_compute[189268]: 2025-11-22 09:07:57.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:07:57 compute-0 nova_compute[189268]: 2025-11-22 09:07:57.932 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:59 compute-0 podman[260157]: 2025-11-22 09:07:59.132727232 +0000 UTC m=+0.082117100 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, release=1755695350, version=9.6, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 09:07:59 compute-0 nova_compute[189268]: 2025-11-22 09:07:59.468 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:07:59 compute-0 podman[203476]: time="2025-11-22T09:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:07:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 22 09:07:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Nov 22 09:08:00 compute-0 nova_compute[189268]: 2025-11-22 09:08:00.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:00 compute-0 nova_compute[189268]: 2025-11-22 09:08:00.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:08:01 compute-0 podman[260178]: 2025-11-22 09:08:01.117427892 +0000 UTC m=+0.064760895 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 09:08:01 compute-0 openstack_network_exporter[205661]: ERROR   09:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:08:01 compute-0 openstack_network_exporter[205661]: ERROR   09:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:08:01 compute-0 openstack_network_exporter[205661]: ERROR   09:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:08:01 compute-0 openstack_network_exporter[205661]: ERROR   09:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:08:01 compute-0 openstack_network_exporter[205661]: ERROR   09:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
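
Editor's note: these exporter errors are expected on a compute node — ovn-northd and a standalone ovsdb-server do not run here, so their control sockets never exist, and the pmd-perf/pmd-rxq appctl calls require a userspace (netdev) datapath while this host uses the kernel ("system") datapath, as the VIF details below confirm. The failing probe amounts to an empty glob; a sketch, with run-directory paths assumed rather than taken from the log:

    # Sketch: roughly the "no control socket files found" check.
    import glob

    for daemon, pattern in {
        'ovn-northd': '/run/ovn/ovn-northd.*.ctl',
        'ovsdb-server': '/run/openvswitch/ovsdb-server.*.ctl',
    }.items():
        socks = glob.glob(pattern)
        print(daemon, '->', socks or 'no control socket files found')
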
Nov 22 09:08:02 compute-0 nova_compute[189268]: 2025-11-22 09:08:02.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:02 compute-0 nova_compute[189268]: 2025-11-22 09:08:02.935 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:02 compute-0 nova_compute[189268]: 2025-11-22 09:08:02.945 189273 DEBUG oslo_concurrency.lockutils [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "4809ca0d-4075-4d68-8ee7-5275c4253891" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:02 compute-0 nova_compute[189268]: 2025-11-22 09:08:02.946 189273 DEBUG oslo_concurrency.lockutils [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "4809ca0d-4075-4d68-8ee7-5275c4253891" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:02 compute-0 nova_compute[189268]: 2025-11-22 09:08:02.946 189273 DEBUG oslo_concurrency.lockutils [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "4809ca0d-4075-4d68-8ee7-5275c4253891-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:02 compute-0 nova_compute[189268]: 2025-11-22 09:08:02.947 189273 DEBUG oslo_concurrency.lockutils [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "4809ca0d-4075-4d68-8ee7-5275c4253891-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:02 compute-0 nova_compute[189268]: 2025-11-22 09:08:02.947 189273 DEBUG oslo_concurrency.lockutils [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "4809ca0d-4075-4d68-8ee7-5275c4253891-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:02 compute-0 nova_compute[189268]: 2025-11-22 09:08:02.948 189273 INFO nova.compute.manager [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Terminating instance
Nov 22 09:08:02 compute-0 nova_compute[189268]: 2025-11-22 09:08:02.949 189273 DEBUG nova.compute.manager [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 09:08:02 compute-0 kernel: tap9ec3e8b1-78 (unregistering): left promiscuous mode
Nov 22 09:08:02 compute-0 NetworkManager[56326]: <info>  [1763802482.9909] device (tap9ec3e8b1-78): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 09:08:03 compute-0 nova_compute[189268]: 2025-11-22 09:08:03.000 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:03 compute-0 ovn_controller[97783]: 2025-11-22T09:08:03Z|00179|binding|INFO|Releasing lport 9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 from this chassis (sb_readonly=0)
Nov 22 09:08:03 compute-0 ovn_controller[97783]: 2025-11-22T09:08:03Z|00180|binding|INFO|Setting lport 9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 down in Southbound
Nov 22 09:08:03 compute-0 ovn_controller[97783]: 2025-11-22T09:08:03Z|00181|binding|INFO|Removing iface tap9ec3e8b1-78 ovn-installed in OVS
Nov 22 09:08:03 compute-0 nova_compute[189268]: 2025-11-22 09:08:03.006 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:03 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:08:03.023 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:e6:af 10.100.3.103'], port_security=['fa:16:3e:5e:e6:af 10.100.3.103'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.103/16', 'neutron:device_id': '4809ca0d-4075-4d68-8ee7-5275c4253891', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8ee541ea-f059-4138-b6cf-87ec84c3e9f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6872b219a7f441adb7db6dc2b4e66fd7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c782ed20-231b-4e59-ad25-952e10372407', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5efbe77c-7f0b-4c5a-a729-30b470e68fec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>], logical_port=9ec3e8b1-78a3-47e8-81c4-f0747a3e1915) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f92b446ee20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:08:03 compute-0 nova_compute[189268]: 2025-11-22 09:08:03.023 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:03 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:08:03.024 106642 INFO neutron.agent.ovn.metadata.agent [-] Port 9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 in datapath 8ee541ea-f059-4138-b6cf-87ec84c3e9f8 unbound from our chassis
Nov 22 09:08:03 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:08:03.025 106642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8ee541ea-f059-4138-b6cf-87ec84c3e9f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 09:08:03 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:08:03.026 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[647ebc36-ede3-47a8-a83c-a4fb73b1c390]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:08:03 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:08:03.027 106642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8 namespace which is not needed anymore
Nov 22 09:08:03 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000010.scope: Deactivated successfully.
Nov 22 09:08:03 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000010.scope: Consumed 6min 39.774s CPU time.
Nov 22 09:08:03 compute-0 systemd-machined[155703]: Machine qemu-17-instance-00000010 terminated.
Nov 22 09:08:03 compute-0 nova_compute[189268]: 2025-11-22 09:08:03.216 189273 INFO nova.virt.libvirt.driver [-] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Instance destroyed successfully.
Nov 22 09:08:03 compute-0 nova_compute[189268]: 2025-11-22 09:08:03.218 189273 DEBUG nova.objects.instance [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lazy-loading 'resources' on Instance uuid 4809ca0d-4075-4d68-8ee7-5275c4253891 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 09:08:03 compute-0 nova_compute[189268]: 2025-11-22 09:08:03.229 189273 DEBUG nova.virt.libvirt.vif [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T08:57:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-1646439-asg-gba3vv6vgk7b-tmn4otq576rq-xk2uuzpcqq5p',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-1646439-asg-gba3vv6vgk7b-tmn4otq576rq-xk2uuzpcqq5p',id=16,image_ref='0f738201-0a54-4f17-a455-df9aa7963f79',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T08:58:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='e65dbf71-31dd-495a-8544-26d84c5284b3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6872b219a7f441adb7db6dc2b4e66fd7',ramdisk_id='',reservation_id='r-1xmx0z8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0f738201-0a54-4f17-a455-df9aa7963f79',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1457752866',owner_user_name='tempest-PrometheusGabbiTest-1457752866-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T08:58:05Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='37215e9bc58040aeb55ccd7e534b2a8c',uuid=4809ca0d-4075-4d68-8ee7-5275c4253891,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "address": "fa:16:3e:5e:e6:af", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.103", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec3e8b1-78", "ovs_interfaceid": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
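
Editor's note: the user_data in the Instance dump above is base64; decoding it recovers the tempest CPU-load script the guest was booted with:

    # Decode the user_data field from the Instance dump above.
    import base64

    user_data = ('IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYv'
                 'dXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==')
    print(base64.b64decode(user_data).decode())
    # #!/bin/sh
    # echo 'Loading CPU'
    # set -v
    # cat /dev/urandom > /dev/null & sleep 300 ; kill $!
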
Nov 22 09:08:03 compute-0 nova_compute[189268]: 2025-11-22 09:08:03.229 189273 DEBUG nova.network.os_vif_util [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Converting VIF {"id": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "address": "fa:16:3e:5e:e6:af", "network": {"id": "8ee541ea-f059-4138-b6cf-87ec84c3e9f8", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.103", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6872b219a7f441adb7db6dc2b4e66fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ec3e8b1-78", "ovs_interfaceid": "9ec3e8b1-78a3-47e8-81c4-f0747a3e1915", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 09:08:03 compute-0 nova_compute[189268]: 2025-11-22 09:08:03.230 189273 DEBUG nova.network.os_vif_util [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:e6:af,bridge_name='br-int',has_traffic_filtering=True,id=9ec3e8b1-78a3-47e8-81c4-f0747a3e1915,network=Network(8ee541ea-f059-4138-b6cf-87ec84c3e9f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec3e8b1-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 09:08:03 compute-0 nova_compute[189268]: 2025-11-22 09:08:03.230 189273 DEBUG os_vif [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:e6:af,bridge_name='br-int',has_traffic_filtering=True,id=9ec3e8b1-78a3-47e8-81c4-f0747a3e1915,network=Network(8ee541ea-f059-4138-b6cf-87ec84c3e9f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec3e8b1-78') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 09:08:03 compute-0 nova_compute[189268]: 2025-11-22 09:08:03.232 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:03 compute-0 nova_compute[189268]: 2025-11-22 09:08:03.232 189273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ec3e8b1-78, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:08:03 compute-0 nova_compute[189268]: 2025-11-22 09:08:03.234 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:03 compute-0 nova_compute[189268]: 2025-11-22 09:08:03.235 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:03 compute-0 nova_compute[189268]: 2025-11-22 09:08:03.237 189273 INFO os_vif [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:e6:af,bridge_name='br-int',has_traffic_filtering=True,id=9ec3e8b1-78a3-47e8-81c4-f0747a3e1915,network=Network(8ee541ea-f059-4138-b6cf-87ec84c3e9f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ec3e8b1-78')
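
Editor's note: at the OVSDB level the whole unplug is the single DelPortCommand logged at 09:08:03.232. The equivalent call through ovsdbapp's Open_vSwitch schema API, as a sketch (the local db.sock path is assumed; port and bridge names are from the log):

    # Sketch: issue the same DelPortCommand directly.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed path
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # DelPortCommand(port=tap9ec3e8b1-78, bridge=br-int, if_exists=True)
    api.del_port('tap9ec3e8b1-78', bridge='br-int', if_exists=True).execute(
        check_error=True)
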
Nov 22 09:08:03 compute-0 nova_compute[189268]: 2025-11-22 09:08:03.238 189273 INFO nova.virt.libvirt.driver [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Deleting instance files /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891_del
Nov 22 09:08:03 compute-0 nova_compute[189268]: 2025-11-22 09:08:03.239 189273 INFO nova.virt.libvirt.driver [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Deletion of /var/lib/nova/instances/4809ca0d-4075-4d68-8ee7-5275c4253891_del complete
Nov 22 09:08:03 compute-0 nova_compute[189268]: 2025-11-22 09:08:03.336 189273 INFO nova.compute.manager [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Took 0.39 seconds to destroy the instance on the hypervisor.
Nov 22 09:08:03 compute-0 nova_compute[189268]: 2025-11-22 09:08:03.337 189273 DEBUG oslo.service.loopingcall [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 09:08:03 compute-0 nova_compute[189268]: 2025-11-22 09:08:03.337 189273 DEBUG nova.compute.manager [-] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 09:08:03 compute-0 nova_compute[189268]: 2025-11-22 09:08:03.338 189273 DEBUG nova.network.neutron [-] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 09:08:03 compute-0 neutron-haproxy-ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8[254674]: [NOTICE]   (254678) : haproxy version is 2.8.14-c23fe91
Nov 22 09:08:03 compute-0 neutron-haproxy-ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8[254674]: [NOTICE]   (254678) : path to executable is /usr/sbin/haproxy
Nov 22 09:08:03 compute-0 neutron-haproxy-ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8[254674]: [WARNING]  (254678) : Exiting Master process...
Nov 22 09:08:03 compute-0 neutron-haproxy-ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8[254674]: [WARNING]  (254678) : Exiting Master process...
Nov 22 09:08:03 compute-0 neutron-haproxy-ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8[254674]: [ALERT]    (254678) : Current worker (254680) exited with code 143 (Terminated)
Nov 22 09:08:03 compute-0 neutron-haproxy-ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8[254674]: [WARNING]  (254678) : All workers exited. Exiting... (0)
Nov 22 09:08:03 compute-0 systemd[1]: libpod-31363378f66a25ca199f40a6b5b370dfe3465a924f0c03ba7c321c77280dfe40.scope: Deactivated successfully.
Nov 22 09:08:03 compute-0 podman[260229]: 2025-11-22 09:08:03.451485572 +0000 UTC m=+0.271737884 container died 31363378f66a25ca199f40a6b5b370dfe3465a924f0c03ba7c321c77280dfe40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 22 09:08:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-31363378f66a25ca199f40a6b5b370dfe3465a924f0c03ba7c321c77280dfe40-userdata-shm.mount: Deactivated successfully.
Nov 22 09:08:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0eaebbaa7eb018c7bb175994600596fd1157614da7078e2bc7bc5075d36020f-merged.mount: Deactivated successfully.
Nov 22 09:08:03 compute-0 podman[260229]: 2025-11-22 09:08:03.931605362 +0000 UTC m=+0.751857694 container cleanup 31363378f66a25ca199f40a6b5b370dfe3465a924f0c03ba7c321c77280dfe40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:08:03 compute-0 systemd[1]: libpod-conmon-31363378f66a25ca199f40a6b5b370dfe3465a924f0c03ba7c321c77280dfe40.scope: Deactivated successfully.
Nov 22 09:08:04 compute-0 nova_compute[189268]: 2025-11-22 09:08:04.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:04 compute-0 nova_compute[189268]: 2025-11-22 09:08:04.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:08:04 compute-0 nova_compute[189268]: 2025-11-22 09:08:04.122 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9907
Nov 22 09:08:04 compute-0 nova_compute[189268]: 2025-11-22 09:08:04.123 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:08:04 compute-0 podman[260275]: 2025-11-22 09:08:04.33136265 +0000 UTC m=+0.354987984 container remove 31363378f66a25ca199f40a6b5b370dfe3465a924f0c03ba7c321c77280dfe40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 09:08:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:08:04.339 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[4c64a953-fcda-4400-a5c6-a0488fbc6987]: (4, ('Sat Nov 22 09:08:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8 (31363378f66a25ca199f40a6b5b370dfe3465a924f0c03ba7c321c77280dfe40)\n31363378f66a25ca199f40a6b5b370dfe3465a924f0c03ba7c321c77280dfe40\nSat Nov 22 09:08:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8 (31363378f66a25ca199f40a6b5b370dfe3465a924f0c03ba7c321c77280dfe40)\n31363378f66a25ca199f40a6b5b370dfe3465a924f0c03ba7c321c77280dfe40\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:08:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:08:04.341 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[90a8acbe-2e12-4264-b262-6c3505007751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:08:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:08:04.342 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ee541ea-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:08:04 compute-0 nova_compute[189268]: 2025-11-22 09:08:04.344 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:04 compute-0 kernel: tap8ee541ea-f0: left promiscuous mode
Nov 22 09:08:04 compute-0 nova_compute[189268]: 2025-11-22 09:08:04.358 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:04 compute-0 nova_compute[189268]: 2025-11-22 09:08:04.360 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:08:04.361 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[03fccac9-d8f1-4ef3-b591-163118c34959]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:08:04 compute-0 nova_compute[189268]: 2025-11-22 09:08:04.373 189273 DEBUG nova.compute.manager [req-6b534f48-7016-41d5-b7a4-ae8f7cd8fce9 req-fdfab0c1-baed-41df-8e6e-ce6a4304459d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Received event network-vif-unplugged-9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:08:04 compute-0 nova_compute[189268]: 2025-11-22 09:08:04.373 189273 DEBUG oslo_concurrency.lockutils [req-6b534f48-7016-41d5-b7a4-ae8f7cd8fce9 req-fdfab0c1-baed-41df-8e6e-ce6a4304459d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "4809ca0d-4075-4d68-8ee7-5275c4253891-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:04 compute-0 nova_compute[189268]: 2025-11-22 09:08:04.373 189273 DEBUG oslo_concurrency.lockutils [req-6b534f48-7016-41d5-b7a4-ae8f7cd8fce9 req-fdfab0c1-baed-41df-8e6e-ce6a4304459d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4809ca0d-4075-4d68-8ee7-5275c4253891-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:04 compute-0 nova_compute[189268]: 2025-11-22 09:08:04.374 189273 DEBUG oslo_concurrency.lockutils [req-6b534f48-7016-41d5-b7a4-ae8f7cd8fce9 req-fdfab0c1-baed-41df-8e6e-ce6a4304459d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4809ca0d-4075-4d68-8ee7-5275c4253891-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:04 compute-0 nova_compute[189268]: 2025-11-22 09:08:04.374 189273 DEBUG nova.compute.manager [req-6b534f48-7016-41d5-b7a4-ae8f7cd8fce9 req-fdfab0c1-baed-41df-8e6e-ce6a4304459d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] No waiting events found dispatching network-vif-unplugged-9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:08:04 compute-0 nova_compute[189268]: 2025-11-22 09:08:04.374 189273 DEBUG nova.compute.manager [req-6b534f48-7016-41d5-b7a4-ae8f7cd8fce9 req-fdfab0c1-baed-41df-8e6e-ce6a4304459d 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Received event network-vif-unplugged-9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 09:08:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:08:04.382 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[4402c4bc-bb24-4ed7-8152-27dd1f662a33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:08:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:08:04.384 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[aabfe81d-267b-4623-a3dc-3ddf629c7977]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:08:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:08:04.399 239666 DEBUG oslo.privsep.daemon [-] privsep: reply[873a180c-3b43-4956-936b-22a03b84c283]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672032, 'reachable_time': 31657, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260289, 'error': None, 'target': 'ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:08:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:08:04.402 106754 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8ee541ea-f059-4138-b6cf-87ec84c3e9f8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 09:08:04 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:08:04.402 106754 DEBUG oslo.privsep.daemon [-] privsep: reply[b8624888-0261-48e8-bdd2-553af0ec6171]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 09:08:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d8ee541ea\x2df059\x2d4138\x2db6cf\x2d87ec84c3e9f8.mount: Deactivated successfully.
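
Editor's note: the namespace teardown above runs through oslo.privsep — the agent-side stubs return the "privsep: reply[...]" tuples logged by daemon.py, while the privileged helper (pid 106754 here) does the actual netlink work, ending with remove_netns in neutron's ip_lib. A condensed sketch of that pattern, with the context wiring and capability set abbreviated from neutron's real definitions:

    # Sketch: privileged netns removal in the oslo.privsep style.
    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context
    from pyroute2 import netns

    default = priv_context.PrivContext(
        __name__, cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[caps.CAP_SYS_ADMIN, caps.CAP_NET_ADMIN])

    @default.entrypoint
    def remove_netns(name):
        # Executes in the forked privsep daemon, which ships the result
        # back to the agent as the reply lines seen above.
        netns.remove(name)
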
Nov 22 09:08:07 compute-0 nova_compute[189268]: 2025-11-22 09:08:07.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:07 compute-0 nova_compute[189268]: 2025-11-22 09:08:07.510 189273 DEBUG nova.compute.manager [req-61caf13c-3c99-4522-84b8-a8d5ea1821b9 req-0cb5660f-178a-4964-90f2-3279f51bebe7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Received event network-vif-plugged-9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:08:07 compute-0 nova_compute[189268]: 2025-11-22 09:08:07.511 189273 DEBUG oslo_concurrency.lockutils [req-61caf13c-3c99-4522-84b8-a8d5ea1821b9 req-0cb5660f-178a-4964-90f2-3279f51bebe7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Acquiring lock "4809ca0d-4075-4d68-8ee7-5275c4253891-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:07 compute-0 nova_compute[189268]: 2025-11-22 09:08:07.511 189273 DEBUG oslo_concurrency.lockutils [req-61caf13c-3c99-4522-84b8-a8d5ea1821b9 req-0cb5660f-178a-4964-90f2-3279f51bebe7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4809ca0d-4075-4d68-8ee7-5275c4253891-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:07 compute-0 nova_compute[189268]: 2025-11-22 09:08:07.512 189273 DEBUG oslo_concurrency.lockutils [req-61caf13c-3c99-4522-84b8-a8d5ea1821b9 req-0cb5660f-178a-4964-90f2-3279f51bebe7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] Lock "4809ca0d-4075-4d68-8ee7-5275c4253891-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:07 compute-0 nova_compute[189268]: 2025-11-22 09:08:07.512 189273 DEBUG nova.compute.manager [req-61caf13c-3c99-4522-84b8-a8d5ea1821b9 req-0cb5660f-178a-4964-90f2-3279f51bebe7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] No waiting events found dispatching network-vif-plugged-9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 09:08:07 compute-0 nova_compute[189268]: 2025-11-22 09:08:07.512 189273 WARNING nova.compute.manager [req-61caf13c-3c99-4522-84b8-a8d5ea1821b9 req-0cb5660f-178a-4964-90f2-3279f51bebe7 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Received unexpected event network-vif-plugged-9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 for instance with vm_state active and task_state deleting.
Nov 22 09:08:07 compute-0 nova_compute[189268]: 2025-11-22 09:08:07.631 189273 DEBUG nova.network.neutron [-] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 09:08:07 compute-0 nova_compute[189268]: 2025-11-22 09:08:07.657 189273 INFO nova.compute.manager [-] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Took 4.32 seconds to deallocate network for instance.
Nov 22 09:08:07 compute-0 nova_compute[189268]: 2025-11-22 09:08:07.703 189273 DEBUG oslo_concurrency.lockutils [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:07 compute-0 nova_compute[189268]: 2025-11-22 09:08:07.703 189273 DEBUG oslo_concurrency.lockutils [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:07 compute-0 nova_compute[189268]: 2025-11-22 09:08:07.735 189273 DEBUG nova.compute.manager [req-dbaad763-2225-4d7d-8d74-769a089fbe88 req-b04d2284-96a5-4dc6-9b6d-07b929de68d3 76d1b167fd494abeb044425933e67b7a f614c682420c498497346af3334aafe2 - - default default] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Received event network-vif-deleted-9ec3e8b1-78a3-47e8-81c4-f0747a3e1915 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 09:08:07 compute-0 nova_compute[189268]: 2025-11-22 09:08:07.794 189273 DEBUG nova.compute.provider_tree [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:08:07 compute-0 nova_compute[189268]: 2025-11-22 09:08:07.807 189273 DEBUG nova.scheduler.client.report [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:08:07 compute-0 nova_compute[189268]: 2025-11-22 09:08:07.937 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:08 compute-0 nova_compute[189268]: 2025-11-22 09:08:08.229 189273 DEBUG oslo_concurrency.lockutils [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:08 compute-0 nova_compute[189268]: 2025-11-22 09:08:08.235 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:08 compute-0 nova_compute[189268]: 2025-11-22 09:08:08.284 189273 INFO nova.scheduler.client.report [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Deleted allocations for instance 4809ca0d-4075-4d68-8ee7-5275c4253891
Nov 22 09:08:08 compute-0 nova_compute[189268]: 2025-11-22 09:08:08.410 189273 DEBUG oslo_concurrency.lockutils [None req-a2440558-53ee-4ce6-939a-1716eacf6ff9 37215e9bc58040aeb55ccd7e534b2a8c 6872b219a7f441adb7db6dc2b4e66fd7 - - default default] Lock "4809ca0d-4075-4d68-8ee7-5275c4253891" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:09 compute-0 podman[260292]: 2025-11-22 09:08:09.116893963 +0000 UTC m=+0.061642230 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 09:08:09 compute-0 podman[260290]: 2025-11-22 09:08:09.126651626 +0000 UTC m=+0.078937445 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Nov 22 09:08:09 compute-0 podman[260291]: 2025-11-22 09:08:09.141129955 +0000 UTC m=+0.090594398 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 09:08:09 compute-0 nova_compute[189268]: 2025-11-22 09:08:09.442 189273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802474.4411786, 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:08:09 compute-0 nova_compute[189268]: 2025-11-22 09:08:09.443 189273 INFO nova.compute.manager [-] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] VM Stopped (Lifecycle Event)
Nov 22 09:08:09 compute-0 nova_compute[189268]: 2025-11-22 09:08:09.462 189273 DEBUG nova.compute.manager [None req-335a19d6-a57b-41ba-8ec0-9a5458f9aa34 - - - - - -] [instance: 4abcb9e5-8c38-4bdf-b4ca-0b182b69e5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:08:10.016 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:08:10.017 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:08:10.017 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:10 compute-0 nova_compute[189268]: 2025-11-22 09:08:10.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:12 compute-0 nova_compute[189268]: 2025-11-22 09:08:12.939 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:13 compute-0 nova_compute[189268]: 2025-11-22 09:08:13.238 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:17 compute-0 nova_compute[189268]: 2025-11-22 09:08:17.942 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:18 compute-0 nova_compute[189268]: 2025-11-22 09:08:18.214 189273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802483.2128553, 4809ca0d-4075-4d68-8ee7-5275c4253891 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 09:08:18 compute-0 nova_compute[189268]: 2025-11-22 09:08:18.214 189273 INFO nova.compute.manager [-] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] VM Stopped (Lifecycle Event)
Nov 22 09:08:18 compute-0 nova_compute[189268]: 2025-11-22 09:08:18.231 189273 DEBUG nova.compute.manager [None req-fb76aa15-e77f-40e1-8f09-61f4de0d294a - - - - - -] [instance: 4809ca0d-4075-4d68-8ee7-5275c4253891] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 09:08:18 compute-0 nova_compute[189268]: 2025-11-22 09:08:18.240 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:22 compute-0 nova_compute[189268]: 2025-11-22 09:08:22.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.100 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.101 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.101 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.102 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.103 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.104 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.105 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.105 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.105 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.105 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.105 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.105 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.105 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.105 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.105 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.105 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.105 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.106 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.106 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.106 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.106 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.106 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.106 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.106 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.106 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.106 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.106 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.106 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.106 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.106 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.106 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.107 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.107 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.107 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.107 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.107 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.107 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.107 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.107 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.107 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.109 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.109 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.111 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.111 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.111 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.112 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.112 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.111 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.112 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.112 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.113 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.113 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.113 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb808e54f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.112 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.113 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.113 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.113 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.114 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.114 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.114 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.114 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.114 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.114 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.114 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.114 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.114 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.115 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.115 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.115 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.115 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.115 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.115 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.115 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.115 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.115 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.115 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.115 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.115 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.115 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.115 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.115 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.115 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.116 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.116 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.116 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.116 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.116 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.116 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:08:22.116 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:08:22 compute-0 nova_compute[189268]: 2025-11-22 09:08:22.123 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:22 compute-0 nova_compute[189268]: 2025-11-22 09:08:22.124 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:22 compute-0 nova_compute[189268]: 2025-11-22 09:08:22.124 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:22 compute-0 nova_compute[189268]: 2025-11-22 09:08:22.124 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:08:22 compute-0 podman[260348]: 2025-11-22 09:08:22.148465033 +0000 UTC m=+0.098481331 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release-0.7.12=, maintainer=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Nov 22 09:08:22 compute-0 podman[260353]: 2025-11-22 09:08:22.178080721 +0000 UTC m=+0.113886356 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 22 09:08:22 compute-0 podman[260350]: 2025-11-22 09:08:22.18625698 +0000 UTC m=+0.127779619 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 09:08:22 compute-0 podman[260349]: 2025-11-22 09:08:22.195047838 +0000 UTC m=+0.140778680 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 09:08:22 compute-0 nova_compute[189268]: 2025-11-22 09:08:22.500 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:08:22 compute-0 nova_compute[189268]: 2025-11-22 09:08:22.501 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5322MB free_disk=72.42321395874023GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:08:22 compute-0 nova_compute[189268]: 2025-11-22 09:08:22.501 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:08:22 compute-0 nova_compute[189268]: 2025-11-22 09:08:22.501 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:08:22 compute-0 nova_compute[189268]: 2025-11-22 09:08:22.580 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:08:22 compute-0 nova_compute[189268]: 2025-11-22 09:08:22.581 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:08:22 compute-0 nova_compute[189268]: 2025-11-22 09:08:22.604 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:08:22 compute-0 nova_compute[189268]: 2025-11-22 09:08:22.618 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:08:22 compute-0 nova_compute[189268]: 2025-11-22 09:08:22.670 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:08:22 compute-0 nova_compute[189268]: 2025-11-22 09:08:22.671 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:08:22 compute-0 nova_compute[189268]: 2025-11-22 09:08:22.944 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:23 compute-0 nova_compute[189268]: 2025-11-22 09:08:23.243 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:23 compute-0 nova_compute[189268]: 2025-11-22 09:08:23.667 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:27 compute-0 nova_compute[189268]: 2025-11-22 09:08:27.946 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:28 compute-0 nova_compute[189268]: 2025-11-22 09:08:28.245 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:29 compute-0 podman[203476]: time="2025-11-22T09:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:08:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 09:08:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4348 "" "Go-http-client/1.1"
Nov 22 09:08:30 compute-0 podman[260432]: 2025-11-22 09:08:30.122701387 +0000 UTC m=+0.075624106 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal)
Nov 22 09:08:31 compute-0 openstack_network_exporter[205661]: ERROR   09:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:08:31 compute-0 openstack_network_exporter[205661]: ERROR   09:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:08:31 compute-0 openstack_network_exporter[205661]: ERROR   09:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:08:31 compute-0 openstack_network_exporter[205661]: ERROR   09:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:08:31 compute-0 openstack_network_exporter[205661]: ERROR   09:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 09:08:32 compute-0 podman[260452]: 2025-11-22 09:08:32.149728376 +0000 UTC m=+0.089913941 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 09:08:32 compute-0 nova_compute[189268]: 2025-11-22 09:08:32.948 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:33 compute-0 nova_compute[189268]: 2025-11-22 09:08:33.247 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:37 compute-0 ovn_controller[97783]: 2025-11-22T09:08:37Z|00182|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Nov 22 09:08:37 compute-0 nova_compute[189268]: 2025-11-22 09:08:37.951 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:38 compute-0 nova_compute[189268]: 2025-11-22 09:08:38.249 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:40 compute-0 podman[260476]: 2025-11-22 09:08:40.133521906 +0000 UTC m=+0.082028099 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 09:08:40 compute-0 podman[260475]: 2025-11-22 09:08:40.138119559 +0000 UTC m=+0.078530634 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 22 09:08:40 compute-0 podman[260474]: 2025-11-22 09:08:40.153963025 +0000 UTC m=+0.101014609 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 09:08:42 compute-0 nova_compute[189268]: 2025-11-22 09:08:42.966 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:43 compute-0 nova_compute[189268]: 2025-11-22 09:08:43.250 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:47 compute-0 nova_compute[189268]: 2025-11-22 09:08:47.968 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:48 compute-0 nova_compute[189268]: 2025-11-22 09:08:48.253 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:52 compute-0 nova_compute[189268]: 2025-11-22 09:08:52.970 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:53 compute-0 podman[260533]: 2025-11-22 09:08:53.126655841 +0000 UTC m=+0.081111284 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, distribution-scope=public, io.openshift.tags=base rhel9, config_id=edpm, name=ubi9, release=1214.1726694543, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Nov 22 09:08:53 compute-0 podman[260541]: 2025-11-22 09:08:53.159468384 +0000 UTC m=+0.098365668 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Nov 22 09:08:53 compute-0 podman[260535]: 2025-11-22 09:08:53.170373728 +0000 UTC m=+0.110845615 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 22 09:08:53 compute-0 podman[260534]: 2025-11-22 09:08:53.199176032 +0000 UTC m=+0.146868743 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:08:53 compute-0 nova_compute[189268]: 2025-11-22 09:08:53.255 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:54 compute-0 nova_compute[189268]: 2025-11-22 09:08:54.110 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:55 compute-0 nova_compute[189268]: 2025-11-22 09:08:55.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:57 compute-0 nova_compute[189268]: 2025-11-22 09:08:57.973 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:58 compute-0 nova_compute[189268]: 2025-11-22 09:08:58.256 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:08:59 compute-0 nova_compute[189268]: 2025-11-22 09:08:59.100 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:08:59 compute-0 podman[203476]: time="2025-11-22T09:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:08:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 09:08:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4350 "" "Go-http-client/1.1"
Nov 22 09:09:01 compute-0 podman[260614]: 2025-11-22 09:09:01.132084685 +0000 UTC m=+0.082966164 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.6, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Nov 22 09:09:01 compute-0 openstack_network_exporter[205661]: ERROR   09:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:09:01 compute-0 openstack_network_exporter[205661]: ERROR   09:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:09:01 compute-0 openstack_network_exporter[205661]: ERROR   09:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:09:01 compute-0 openstack_network_exporter[205661]: ERROR   09:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 09:09:01 compute-0 openstack_network_exporter[205661]: ERROR   09:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:09:02 compute-0 nova_compute[189268]: 2025-11-22 09:09:02.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:09:02 compute-0 nova_compute[189268]: 2025-11-22 09:09:02.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:09:02 compute-0 nova_compute[189268]: 2025-11-22 09:09:02.975 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:03 compute-0 podman[260636]: 2025-11-22 09:09:03.086051587 +0000 UTC m=+0.080485667 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 09:09:03 compute-0 nova_compute[189268]: 2025-11-22 09:09:03.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:09:03 compute-0 nova_compute[189268]: 2025-11-22 09:09:03.260 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:06 compute-0 nova_compute[189268]: 2025-11-22 09:09:06.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:09:06 compute-0 nova_compute[189268]: 2025-11-22 09:09:06.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:09:06 compute-0 nova_compute[189268]: 2025-11-22 09:09:06.100 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:09:06 compute-0 nova_compute[189268]: 2025-11-22 09:09:06.215 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:09:07 compute-0 nova_compute[189268]: 2025-11-22 09:09:07.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:09:07 compute-0 nova_compute[189268]: 2025-11-22 09:09:07.977 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:08 compute-0 nova_compute[189268]: 2025-11-22 09:09:08.263 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:09:10.017 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:09:10.018 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:09:10.018 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:11 compute-0 nova_compute[189268]: 2025-11-22 09:09:11.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:09:11 compute-0 podman[260661]: 2025-11-22 09:09:11.106263477 +0000 UTC m=+0.063313635 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 09:09:11 compute-0 podman[260662]: 2025-11-22 09:09:11.134502887 +0000 UTC m=+0.087942908 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 22 09:09:11 compute-0 podman[260660]: 2025-11-22 09:09:11.142158533 +0000 UTC m=+0.101309177 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:09:12 compute-0 nova_compute[189268]: 2025-11-22 09:09:12.980 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:13 compute-0 nova_compute[189268]: 2025-11-22 09:09:13.265 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:17 compute-0 nova_compute[189268]: 2025-11-22 09:09:17.983 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:18 compute-0 nova_compute[189268]: 2025-11-22 09:09:18.267 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:22 compute-0 nova_compute[189268]: 2025-11-22 09:09:22.984 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:23 compute-0 nova_compute[189268]: 2025-11-22 09:09:23.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:09:23 compute-0 nova_compute[189268]: 2025-11-22 09:09:23.122 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:23 compute-0 nova_compute[189268]: 2025-11-22 09:09:23.123 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:23 compute-0 nova_compute[189268]: 2025-11-22 09:09:23.123 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:23 compute-0 nova_compute[189268]: 2025-11-22 09:09:23.124 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:09:23 compute-0 nova_compute[189268]: 2025-11-22 09:09:23.269 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:23 compute-0 nova_compute[189268]: 2025-11-22 09:09:23.467 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:09:23 compute-0 nova_compute[189268]: 2025-11-22 09:09:23.468 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5362MB free_disk=72.42330551147461GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:09:23 compute-0 nova_compute[189268]: 2025-11-22 09:09:23.469 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:09:23 compute-0 nova_compute[189268]: 2025-11-22 09:09:23.469 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:09:23 compute-0 nova_compute[189268]: 2025-11-22 09:09:23.685 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:09:23 compute-0 nova_compute[189268]: 2025-11-22 09:09:23.685 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:09:23 compute-0 nova_compute[189268]: 2025-11-22 09:09:23.709 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:09:23 compute-0 nova_compute[189268]: 2025-11-22 09:09:23.722 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:09:23 compute-0 nova_compute[189268]: 2025-11-22 09:09:23.724 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:09:23 compute-0 nova_compute[189268]: 2025-11-22 09:09:23.724 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:09:24 compute-0 podman[260721]: 2025-11-22 09:09:24.139214115 +0000 UTC m=+0.087816854 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:09:24 compute-0 podman[260718]: 2025-11-22 09:09:24.151419644 +0000 UTC m=+0.103855496 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, version=9.4, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 22 09:09:24 compute-0 podman[260719]: 2025-11-22 09:09:24.171760871 +0000 UTC m=+0.125398875 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 09:09:24 compute-0 podman[260720]: 2025-11-22 09:09:24.18209669 +0000 UTC m=+0.128107639 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Nov 22 09:09:27 compute-0 nova_compute[189268]: 2025-11-22 09:09:27.987 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:28 compute-0 nova_compute[189268]: 2025-11-22 09:09:28.272 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:29 compute-0 podman[203476]: time="2025-11-22T09:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:09:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 09:09:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4349 "" "Go-http-client/1.1"
Nov 22 09:09:31 compute-0 openstack_network_exporter[205661]: ERROR   09:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:09:31 compute-0 openstack_network_exporter[205661]: ERROR   09:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:09:31 compute-0 openstack_network_exporter[205661]: ERROR   09:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:09:31 compute-0 openstack_network_exporter[205661]: ERROR   09:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:09:31 compute-0 openstack_network_exporter[205661]: ERROR   09:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 09:09:32 compute-0 podman[260799]: 2025-11-22 09:09:32.113339157 +0000 UTC m=+0.069093180 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Nov 22 09:09:32 compute-0 nova_compute[189268]: 2025-11-22 09:09:32.989 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:33 compute-0 nova_compute[189268]: 2025-11-22 09:09:33.275 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:34 compute-0 podman[260821]: 2025-11-22 09:09:34.120060738 +0000 UTC m=+0.075909393 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 22 09:09:37 compute-0 nova_compute[189268]: 2025-11-22 09:09:37.992 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:38 compute-0 nova_compute[189268]: 2025-11-22 09:09:38.278 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:42 compute-0 podman[260845]: 2025-11-22 09:09:42.107036761 +0000 UTC m=+0.061476678 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 22 09:09:42 compute-0 podman[260846]: 2025-11-22 09:09:42.109544249 +0000 UTC m=+0.063290518 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 09:09:42 compute-0 podman[260844]: 2025-11-22 09:09:42.119759273 +0000 UTC m=+0.079605266 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 22 09:09:42 compute-0 nova_compute[189268]: 2025-11-22 09:09:42.997 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:43 compute-0 nova_compute[189268]: 2025-11-22 09:09:43.281 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:47 compute-0 nova_compute[189268]: 2025-11-22 09:09:47.996 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:48 compute-0 nova_compute[189268]: 2025-11-22 09:09:48.284 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:48 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:09:48.781 106642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:cf:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'd6:f7:8f:a1:cd:35'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 09:09:48 compute-0 nova_compute[189268]: 2025-11-22 09:09:48.782 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:48 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:09:48.783 106642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 09:09:49 compute-0 nova_compute[189268]: 2025-11-22 09:09:49.770 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:51 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:09:51.786 106642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e5f17f07-bc92-4131-bf96-5df2839ca4b0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 09:09:52 compute-0 nova_compute[189268]: 2025-11-22 09:09:52.998 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:53 compute-0 nova_compute[189268]: 2025-11-22 09:09:53.287 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:54 compute-0 nova_compute[189268]: 2025-11-22 09:09:54.720 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:09:55 compute-0 nova_compute[189268]: 2025-11-22 09:09:55.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:09:55 compute-0 podman[260906]: 2025-11-22 09:09:55.134813778 +0000 UTC m=+0.083938402 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.openshift.expose-services=, com.redhat.component=ubi9-container, version=9.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_id=edpm, distribution-scope=public, container_name=kepler, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, release=1214.1726694543, release-0.7.12=)
Nov 22 09:09:55 compute-0 podman[260913]: 2025-11-22 09:09:55.147878751 +0000 UTC m=+0.080258704 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:09:55 compute-0 podman[260908]: 2025-11-22 09:09:55.180056858 +0000 UTC m=+0.116652405 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, container_name=ceilometer_agent_compute)
Nov 22 09:09:55 compute-0 podman[260907]: 2025-11-22 09:09:55.207191328 +0000 UTC m=+0.151928963 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:09:58 compute-0 nova_compute[189268]: 2025-11-22 09:09:58.001 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:58 compute-0 nova_compute[189268]: 2025-11-22 09:09:58.290 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:09:59 compute-0 podman[203476]: time="2025-11-22T09:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:09:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 09:09:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4342 "" "Go-http-client/1.1"
Nov 22 09:10:00 compute-0 nova_compute[189268]: 2025-11-22 09:10:00.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:10:01 compute-0 openstack_network_exporter[205661]: ERROR   09:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:10:01 compute-0 openstack_network_exporter[205661]: ERROR   09:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:10:01 compute-0 openstack_network_exporter[205661]: ERROR   09:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:10:01 compute-0 openstack_network_exporter[205661]: ERROR   09:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:10:01 compute-0 openstack_network_exporter[205661]: ERROR   09:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 09:10:03 compute-0 nova_compute[189268]: 2025-11-22 09:10:03.003 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:03 compute-0 nova_compute[189268]: 2025-11-22 09:10:03.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:10:03 compute-0 podman[260986]: 2025-11-22 09:10:03.153687892 +0000 UTC m=+0.109875972 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, version=9.6, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, config_id=edpm, managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container)
Nov 22 09:10:03 compute-0 nova_compute[189268]: 2025-11-22 09:10:03.293 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:04 compute-0 nova_compute[189268]: 2025-11-22 09:10:04.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:10:04 compute-0 nova_compute[189268]: 2025-11-22 09:10:04.101 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:10:05 compute-0 podman[261007]: 2025-11-22 09:10:05.12472006 +0000 UTC m=+0.075994038 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 09:10:06 compute-0 nova_compute[189268]: 2025-11-22 09:10:06.101 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:10:06 compute-0 nova_compute[189268]: 2025-11-22 09:10:06.102 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:10:06 compute-0 nova_compute[189268]: 2025-11-22 09:10:06.103 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:10:06 compute-0 nova_compute[189268]: 2025-11-22 09:10:06.118 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:10:08 compute-0 nova_compute[189268]: 2025-11-22 09:10:08.005 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:08 compute-0 nova_compute[189268]: 2025-11-22 09:10:08.296 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:09 compute-0 nova_compute[189268]: 2025-11-22 09:10:09.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:10:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:10:10.018 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:10:10.019 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:10:10.019 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:11 compute-0 nova_compute[189268]: 2025-11-22 09:10:11.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:10:13 compute-0 nova_compute[189268]: 2025-11-22 09:10:13.007 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:13 compute-0 podman[261033]: 2025-11-22 09:10:13.143726129 +0000 UTC m=+0.079578746 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 09:10:13 compute-0 podman[261031]: 2025-11-22 09:10:13.150986724 +0000 UTC m=+0.096965944 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 22 09:10:13 compute-0 podman[261032]: 2025-11-22 09:10:13.162569676 +0000 UTC m=+0.102270997 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 22 09:10:13 compute-0 nova_compute[189268]: 2025-11-22 09:10:13.299 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:18 compute-0 nova_compute[189268]: 2025-11-22 09:10:18.009 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:18 compute-0 nova_compute[189268]: 2025-11-22 09:10:18.302 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.101 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.102 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.102 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.103 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fbb81d4b800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb841ff170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.104 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.105 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83c3d9a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.105 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.105 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb834cca10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.105 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.106 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83ec0260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.106 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fbb81d4bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.107 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.107 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.107 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fbb81df80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.108 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.108 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fbb81d4bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.108 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.108 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fbb81d49820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.108 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.109 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fbb81df8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.109 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.109 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fbb81d49850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.109 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.109 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fbb81d4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.109 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.109 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fbb844a61b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.109 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.110 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fbb81d4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.110 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.110 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fbb81d4bd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.107 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.110 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81ed9b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.111 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb83498380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.111 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.111 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.111 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.111 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81df8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.111 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.111 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.112 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.112 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.112 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.112 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.112 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4bf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.112 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fbb81d4b7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fbb83c3cf50>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.110 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.113 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fbb81d4b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.113 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.113 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fbb81d4b860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.113 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.113 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fbb81d4b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.113 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.113 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fbb81d4b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.114 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.114 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fbb81d4b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.114 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.114 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fbb81d4b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.114 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.114 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fbb81df8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.114 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.114 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fbb81d4b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.114 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.115 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fbb81d4b500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.115 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.115 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fbb81d4bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.115 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.115 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fbb81d4b560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.115 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.115 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fbb81d4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.115 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.116 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fbb81d4bec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.116 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.116 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fbb81d4bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.116 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.116 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fbb81d4b7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fbb81f0fb90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.116 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
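The alternating "Executing discovery process" / "Skip pollster ..., no resources found this cycle" pairs above reflect a per-cycle discovery cache: the [local_instances] discovery runs once, its (empty) result is cached under that key, and every pollster bound to it is then skipped. A hedged sketch of that flow; discover_local_instances is a made-up stand-in for the instance discovery, not ceilometer code:

    # Discovery results are cached per method per cycle; an empty result
    # skips every pollster that depends on it, as in the lines above.
    discovery_cache = {}

    def discover_local_instances():
        return []  # no instances running on this host this cycle

    def discover(method):
        if method not in discovery_cache:
            discovery_cache[method] = discover_local_instances()
        return discovery_cache[method]

    for meter in ("cpu", "memory.usage", "disk.root.size"):
        if not discover("local_instances"):
            print("Skip pollster %s, no resources found this cycle" % meter)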
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.117 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.117 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.117 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.117 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.117 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.117 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.117 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.117 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.117 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.117 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.118 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.118 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.118 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.118 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.118 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.118 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.118 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.118 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.118 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.118 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.118 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.118 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.118 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.118 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.119 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:22 compute-0 ceilometer_agent_compute[200029]: 2025-11-22 09:10:22.119 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 22 09:10:23 compute-0 nova_compute[189268]: 2025-11-22 09:10:23.012 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:23 compute-0 nova_compute[189268]: 2025-11-22 09:10:23.305 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:25 compute-0 nova_compute[189268]: 2025-11-22 09:10:25.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:10:25 compute-0 nova_compute[189268]: 2025-11-22 09:10:25.124 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:25 compute-0 nova_compute[189268]: 2025-11-22 09:10:25.125 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:25 compute-0 nova_compute[189268]: 2025-11-22 09:10:25.125 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
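The Acquiring/acquired/released triplet above is the standard oslo.concurrency pattern: a named in-process lock wrapped around a critical section, with the waited/held durations logged at DEBUG. A minimal sketch using the real oslo_concurrency API (clean_cache is illustrative, and the oslo.concurrency package must be installed):

    # The decorator serializes callers on the named lock and emits the same
    # Acquiring/acquired/released DEBUG lines seen in the log above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_cache():
        pass  # body runs with the "compute_resources" lock held

    clean_cache()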
Nov 22 09:10:25 compute-0 nova_compute[189268]: 2025-11-22 09:10:25.126 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:10:25 compute-0 nova_compute[189268]: 2025-11-22 09:10:25.507 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:10:25 compute-0 nova_compute[189268]: 2025-11-22 09:10:25.509 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5349MB free_disk=72.42364501953125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:10:25 compute-0 nova_compute[189268]: 2025-11-22 09:10:25.510 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:10:25 compute-0 nova_compute[189268]: 2025-11-22 09:10:25.511 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:10:25 compute-0 nova_compute[189268]: 2025-11-22 09:10:25.575 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:10:25 compute-0 nova_compute[189268]: 2025-11-22 09:10:25.576 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:10:25 compute-0 nova_compute[189268]: 2025-11-22 09:10:25.602 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:10:25 compute-0 nova_compute[189268]: 2025-11-22 09:10:25.614 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
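A worked example of the capacity implied by that inventory, assuming placement's usual convention that schedulable capacity is (total - reserved) * allocation_ratio (the formula is an assumption stated here, not something this log asserts):

    # Numbers copied from the inventory line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2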
Nov 22 09:10:25 compute-0 nova_compute[189268]: 2025-11-22 09:10:25.616 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:10:25 compute-0 nova_compute[189268]: 2025-11-22 09:10:25.616 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:10:26 compute-0 podman[261090]: 2025-11-22 09:10:26.132536535 +0000 UTC m=+0.080120499 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30)
Nov 22 09:10:26 compute-0 podman[261093]: 2025-11-22 09:10:26.143693566 +0000 UTC m=+0.082195575 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 22 09:10:26 compute-0 podman[261092]: 2025-11-22 09:10:26.154750704 +0000 UTC m=+0.094950730 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Nov 22 09:10:26 compute-0 podman[261091]: 2025-11-22 09:10:26.176974423 +0000 UTC m=+0.122693197 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 09:10:26 compute-0 nova_compute[189268]: 2025-11-22 09:10:26.611 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:10:28 compute-0 nova_compute[189268]: 2025-11-22 09:10:28.015 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:28 compute-0 ovn_controller[97783]: 2025-11-22T09:10:28Z|00183|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 22 09:10:28 compute-0 nova_compute[189268]: 2025-11-22 09:10:28.309 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:29 compute-0 podman[203476]: time="2025-11-22T09:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:10:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 09:10:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4355 "" "Go-http-client/1.1"
Nov 22 09:10:31 compute-0 openstack_network_exporter[205661]: ERROR   09:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:10:31 compute-0 openstack_network_exporter[205661]: ERROR   09:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:10:31 compute-0 openstack_network_exporter[205661]: ERROR   09:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:10:31 compute-0 openstack_network_exporter[205661]: ERROR   09:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:10:31 compute-0 openstack_network_exporter[205661]: ERROR   09:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 09:10:33 compute-0 nova_compute[189268]: 2025-11-22 09:10:33.019 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:33 compute-0 nova_compute[189268]: 2025-11-22 09:10:33.312 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:34 compute-0 podman[261172]: 2025-11-22 09:10:34.114977369 +0000 UTC m=+0.073061970 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, vcs-type=git, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, io.openshift.expose-services=)
Nov 22 09:10:36 compute-0 podman[261193]: 2025-11-22 09:10:36.139739445 +0000 UTC m=+0.089130573 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 22 09:10:38 compute-0 nova_compute[189268]: 2025-11-22 09:10:38.025 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:38 compute-0 nova_compute[189268]: 2025-11-22 09:10:38.315 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:43 compute-0 nova_compute[189268]: 2025-11-22 09:10:43.023 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:43 compute-0 nova_compute[189268]: 2025-11-22 09:10:43.317 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:44 compute-0 podman[261218]: 2025-11-22 09:10:44.145981539 +0000 UTC m=+0.081050985 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 09:10:44 compute-0 podman[261217]: 2025-11-22 09:10:44.161872327 +0000 UTC m=+0.110703344 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 09:10:44 compute-0 podman[261219]: 2025-11-22 09:10:44.170630323 +0000 UTC m=+0.103753817 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 09:10:48 compute-0 nova_compute[189268]: 2025-11-22 09:10:48.027 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:48 compute-0 nova_compute[189268]: 2025-11-22 09:10:48.321 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:53 compute-0 nova_compute[189268]: 2025-11-22 09:10:53.029 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:53 compute-0 nova_compute[189268]: 2025-11-22 09:10:53.324 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:55 compute-0 nova_compute[189268]: 2025-11-22 09:10:55.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:10:55 compute-0 nova_compute[189268]: 2025-11-22 09:10:55.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:10:57 compute-0 podman[261277]: 2025-11-22 09:10:57.123878942 +0000 UTC m=+0.077711076 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, architecture=x86_64, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, version=9.4, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.component=ubi9-container)
Nov 22 09:10:57 compute-0 podman[261279]: 2025-11-22 09:10:57.135133765 +0000 UTC m=+0.078618439 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.build-date=20251118, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 09:10:57 compute-0 podman[261278]: 2025-11-22 09:10:57.169964853 +0000 UTC m=+0.117743473 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 22 09:10:57 compute-0 podman[261283]: 2025-11-22 09:10:57.176923101 +0000 UTC m=+0.112645117 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 22 09:10:58 compute-0 nova_compute[189268]: 2025-11-22 09:10:58.030 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:58 compute-0 nova_compute[189268]: 2025-11-22 09:10:58.330 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:10:59 compute-0 podman[203476]: time="2025-11-22T09:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:10:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 09:10:59 compute-0 podman[203476]: @ - - [22/Nov/2025:09:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4350 "" "Go-http-client/1.1"
Nov 22 09:11:00 compute-0 nova_compute[189268]: 2025-11-22 09:11:00.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:11:01 compute-0 openstack_network_exporter[205661]: ERROR   09:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:11:01 compute-0 openstack_network_exporter[205661]: ERROR   09:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:11:01 compute-0 openstack_network_exporter[205661]: ERROR   09:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:11:01 compute-0 openstack_network_exporter[205661]: ERROR   09:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:11:01 compute-0 openstack_network_exporter[205661]: ERROR   09:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 09:11:03 compute-0 nova_compute[189268]: 2025-11-22 09:11:03.044 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:03 compute-0 nova_compute[189268]: 2025-11-22 09:11:03.333 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:04 compute-0 nova_compute[189268]: 2025-11-22 09:11:04.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:11:05 compute-0 podman[261357]: 2025-11-22 09:11:05.126859928 +0000 UTC m=+0.076804330 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, release=1755695350, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41)
Nov 22 09:11:06 compute-0 nova_compute[189268]: 2025-11-22 09:11:06.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:11:06 compute-0 nova_compute[189268]: 2025-11-22 09:11:06.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 09:11:07 compute-0 nova_compute[189268]: 2025-11-22 09:11:07.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:11:07 compute-0 nova_compute[189268]: 2025-11-22 09:11:07.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 09:11:07 compute-0 nova_compute[189268]: 2025-11-22 09:11:07.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 09:11:07 compute-0 nova_compute[189268]: 2025-11-22 09:11:07.114 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 09:11:07 compute-0 podman[261378]: 2025-11-22 09:11:07.114926705 +0000 UTC m=+0.065216917 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 22 09:11:08 compute-0 nova_compute[189268]: 2025-11-22 09:11:08.047 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:08 compute-0 nova_compute[189268]: 2025-11-22 09:11:08.336 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:09 compute-0 nova_compute[189268]: 2025-11-22 09:11:09.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:11:09 compute-0 sshd-session[261403]: Invalid user oracle from 80.94.92.164 port 46864
Nov 22 09:11:09 compute-0 sshd-session[261403]: Connection closed by invalid user oracle 80.94.92.164 port 46864 [preauth]
Nov 22 09:11:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:11:10.020 106642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:11:10.021 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:10 compute-0 ovn_metadata_agent[106637]: 2025-11-22 09:11:10.021 106642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:11 compute-0 nova_compute[189268]: 2025-11-22 09:11:11.098 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:11:12 compute-0 nova_compute[189268]: 2025-11-22 09:11:12.108 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:11:13 compute-0 nova_compute[189268]: 2025-11-22 09:11:13.049 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:13 compute-0 nova_compute[189268]: 2025-11-22 09:11:13.339 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:14 compute-0 podman[261406]: 2025-11-22 09:11:14.762410593 +0000 UTC m=+0.085881925 container health_status 2659037feb70b462e6a496e9f9943cd1b59ef2ad38bcf3fdf0ebd5390de75b30 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 22 09:11:14 compute-0 podman[261407]: 2025-11-22 09:11:14.784620742 +0000 UTC m=+0.103360717 container health_status b82e87bb702fd789332c9b179d252610054afef877181cdafc350fe12e9ebff4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:11:14 compute-0 podman[261405]: 2025-11-22 09:11:14.799173024 +0000 UTC m=+0.124468915 container health_status 02f0b7dbbd0d592dc47900c5933d9d18a0e199bc5d339cc8bb3733d2ec837878 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Nov 22 09:11:18 compute-0 nova_compute[189268]: 2025-11-22 09:11:18.052 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:18 compute-0 nova_compute[189268]: 2025-11-22 09:11:18.342 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:23 compute-0 nova_compute[189268]: 2025-11-22 09:11:23.054 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:23 compute-0 nova_compute[189268]: 2025-11-22 09:11:23.344 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:24 compute-0 nova_compute[189268]: 2025-11-22 09:11:24.099 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:11:24 compute-0 nova_compute[189268]: 2025-11-22 09:11:24.099 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 09:11:25 compute-0 sshd-session[261469]: Accepted publickey for zuul from 192.168.122.10 port 33636 ssh2: ECDSA SHA256:eG+N1/41LOqGqG+a4j8P+CpFCwWXtPQK2mWsQjRSKN4
Nov 22 09:11:25 compute-0 systemd-logind[826]: New session 32 of user zuul.
Nov 22 09:11:25 compute-0 systemd[1]: Started Session 32 of User zuul.
Nov 22 09:11:25 compute-0 sshd-session[261469]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 09:11:25 compute-0 sudo[261473]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 22 09:11:25 compute-0 sudo[261473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 09:11:26 compute-0 nova_compute[189268]: 2025-11-22 09:11:26.121 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:11:26 compute-0 nova_compute[189268]: 2025-11-22 09:11:26.122 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 09:11:26 compute-0 nova_compute[189268]: 2025-11-22 09:11:26.135 189273 DEBUG nova.compute.manager [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 09:11:27 compute-0 nova_compute[189268]: 2025-11-22 09:11:27.112 189273 DEBUG oslo_service.periodic_task [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 09:11:27 compute-0 nova_compute[189268]: 2025-11-22 09:11:27.147 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:27 compute-0 nova_compute[189268]: 2025-11-22 09:11:27.148 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:27 compute-0 nova_compute[189268]: 2025-11-22 09:11:27.149 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:27 compute-0 nova_compute[189268]: 2025-11-22 09:11:27.149 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 09:11:27 compute-0 nova_compute[189268]: 2025-11-22 09:11:27.462 189273 WARNING nova.virt.libvirt.driver [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 09:11:27 compute-0 nova_compute[189268]: 2025-11-22 09:11:27.464 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5319MB free_disk=72.42316055297852GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 09:11:27 compute-0 nova_compute[189268]: 2025-11-22 09:11:27.465 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 09:11:27 compute-0 nova_compute[189268]: 2025-11-22 09:11:27.465 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 09:11:27 compute-0 nova_compute[189268]: 2025-11-22 09:11:27.771 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 09:11:27 compute-0 nova_compute[189268]: 2025-11-22 09:11:27.772 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 09:11:28 compute-0 nova_compute[189268]: 2025-11-22 09:11:28.056 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:28 compute-0 podman[261611]: 2025-11-22 09:11:28.135135764 +0000 UTC m=+0.081248530 container health_status c75207e5ade1c7391ebcad23e649d384d3ce001b15c676241e8a12f63848ed9d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a79a8f2ab21878d13a89fdbe145f3f6a, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 22 09:11:28 compute-0 podman[261606]: 2025-11-22 09:11:28.141822584 +0000 UTC m=+0.093854160 container health_status 03f85223c410055d44a15b250a110807422dfc8fd22b98a2dc5e93ecfef42a93 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, name=ubi9, com.redhat.component=ubi9-container, managed_by=edpm_ansible, release-0.7.12=, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vcs-type=git, config_id=edpm, release=1214.1726694543, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30)
Nov 22 09:11:28 compute-0 podman[261612]: 2025-11-22 09:11:28.165851011 +0000 UTC m=+0.106507520 container health_status c75f601a9dec42f17ce46ef31052d0c66bc7d4be7cd9af52d3be2f8e878974cd (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 09:11:28 compute-0 podman[261610]: 2025-11-22 09:11:28.175975184 +0000 UTC m=+0.123592581 container health_status 3036b45c9960987358fa41670b3197bc1329bb48c680304f906d364a99ace96d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 09:11:28 compute-0 nova_compute[189268]: 2025-11-22 09:11:28.312 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing inventories for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 09:11:28 compute-0 nova_compute[189268]: 2025-11-22 09:11:28.346 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:28 compute-0 nova_compute[189268]: 2025-11-22 09:11:28.463 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating ProviderTree inventory for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 09:11:28 compute-0 nova_compute[189268]: 2025-11-22 09:11:28.464 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Updating inventory in ProviderTree for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 09:11:28 compute-0 nova_compute[189268]: 2025-11-22 09:11:28.488 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing aggregate associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 09:11:28 compute-0 nova_compute[189268]: 2025-11-22 09:11:28.516 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Refreshing trait associations for resource provider 699bf240-9d16-48c7-bff5-24c8bb8aac19, traits: COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 09:11:28 compute-0 nova_compute[189268]: 2025-11-22 09:11:28.554 189273 DEBUG nova.compute.provider_tree [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed in ProviderTree for provider: 699bf240-9d16-48c7-bff5-24c8bb8aac19 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 09:11:28 compute-0 nova_compute[189268]: 2025-11-22 09:11:28.572 189273 DEBUG nova.scheduler.client.report [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Inventory has not changed for provider 699bf240-9d16-48c7-bff5-24c8bb8aac19 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 09:11:28 compute-0 nova_compute[189268]: 2025-11-22 09:11:28.574 189273 DEBUG nova.compute.resource_tracker [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 09:11:28 compute-0 nova_compute[189268]: 2025-11-22 09:11:28.574 189273 DEBUG oslo_concurrency.lockutils [None req-67d85589-f2ee-4163-9fb1-a1de4562259f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 09:11:29 compute-0 podman[203476]: time="2025-11-22T09:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 22 09:11:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Nov 22 09:11:29 compute-0 podman[203476]: @ - - [22/Nov/2025:09:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4347 "" "Go-http-client/1.1"
Nov 22 09:11:30 compute-0 ovs-vsctl[261718]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 22 09:11:31 compute-0 openstack_network_exporter[205661]: ERROR   09:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:11:31 compute-0 openstack_network_exporter[205661]: ERROR   09:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 22 09:11:31 compute-0 openstack_network_exporter[205661]: ERROR   09:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 22 09:11:31 compute-0 openstack_network_exporter[205661]: ERROR   09:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 22 09:11:31 compute-0 openstack_network_exporter[205661]: ERROR   09:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 22 09:11:31 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 261497 (sos)
Nov 22 09:11:31 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 22 09:11:31 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 22 09:11:32 compute-0 virtqemud[189170]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 22 09:11:32 compute-0 virtqemud[189170]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 22 09:11:32 compute-0 virtqemud[189170]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 22 09:11:33 compute-0 nova_compute[189268]: 2025-11-22 09:11:33.058 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:33 compute-0 nova_compute[189268]: 2025-11-22 09:11:33.349 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:33 compute-0 crontab[262144]: (root) LIST (root)
Nov 22 09:11:36 compute-0 podman[262236]: 2025-11-22 09:11:36.158939222 +0000 UTC m=+0.093224074 container health_status 0f5001ff5a260d2f7ba7e1d39cce6aa2b00a67d2cf5150c85dddb37fdc408de4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 22 09:11:36 compute-0 systemd[1]: Starting Hostname Service...
Nov 22 09:11:36 compute-0 systemd[1]: Started Hostname Service.
Nov 22 09:11:37 compute-0 podman[262374]: 2025-11-22 09:11:37.877763895 +0000 UTC m=+0.101896197 container health_status 213c4458e3095c907fb736fa971c90e33653e40a32eb54b0127c1720fcc88001 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 22 09:11:38 compute-0 nova_compute[189268]: 2025-11-22 09:11:38.059 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:38 compute-0 nova_compute[189268]: 2025-11-22 09:11:38.350 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:43 compute-0 nova_compute[189268]: 2025-11-22 09:11:43.061 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 09:11:43 compute-0 nova_compute[189268]: 2025-11-22 09:11:43.352 189273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263